Hello,

I'm interested in how to calculate recall and precision values for custom training models. I'm currently training a custom classifier to identify eight frog species based on the pre-trained BirdNET model. It would be helpful to know the performance metrics (precision and recall) for each of the eight species.

What I have in mind is something like sklearn.metrics.classification_report, which displays precision and recall for every target class. Example:
```
              precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3

    accuracy                           0.60         5
   macro avg       0.50      0.56      0.49         5
weighted avg       0.70      0.60      0.61         5
```
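For reference, a report in exactly this format comes straight from scikit-learn. A minimal sketch with toy labels, chosen so that they reproduce the example report above (the class names are placeholders, not real species):

```python
from sklearn.metrics import classification_report

# Toy ground-truth labels and predictions for a 3-class problem;
# in practice these would come from the validation split.
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

# The target_names are placeholders; for the frog classifier they
# would be the eight species labels.
print(classification_report(y_true, y_pred,
                            target_names=["class 0", "class 1", "class 2"]))
```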
While BirdNET provides segments.py for verifying the output, it doesn't seem to offer any information about recall.

Although it might be straightforward to modify the existing train.py script to save such a classification report, I found it difficult to figure out how the validation labels and predictions are stored during training. Something like the sketch below is what I'm after.
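To be concrete, here is roughly the hook I had in mind. All of the names (classifier, x_val, y_val, species_labels) are hypothetical placeholders, not BirdNET's actual internals, and the argmax decoding assumes a single-label setup:

```python
import numpy as np
from sklearn.metrics import classification_report

def report_validation_metrics(classifier, x_val, y_val, species_labels):
    """Print per-class precision/recall for a trained classifier.

    All arguments are hypothetical hooks: x_val/y_val are the held-out
    validation features and one-hot labels, classifier is any model
    exposing a predict() method, and species_labels names the classes.
    """
    probs = classifier.predict(x_val)   # per-class scores
    y_pred = np.argmax(probs, axis=1)   # assumes single-label classes
    y_true = np.argmax(y_val, axis=1)   # decode one-hot labels
    print(classification_report(y_true, y_pred, target_names=species_labels))
```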
Thank you!
Generally speaking, you could add your own metrics in model.py. But we might also add a table after training is complete, with some of the most common metrics. @kahst what do you think?
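For example, per-class precision and recall can be attached when the classifier head is compiled. A minimal sketch assuming a standard Keras compile call; the stand-in model, embedding size, loss, and optimizer below are placeholders, not the actual model.py settings:

```python
import tensorflow as tf

NUM_CLASSES = 8  # e.g. eight frog species

# class_id restricts each metric to a single output unit, so every
# class gets its own precision/recall entry in the training logs.
metrics = []
for i in range(NUM_CLASSES):
    metrics.append(tf.keras.metrics.Precision(class_id=i, name=f"precision_{i}"))
    metrics.append(tf.keras.metrics.Recall(class_id=i, name=f"recall_{i}"))

# Stand-in for the real classifier head defined in model.py;
# the input size of 1024 is only a placeholder.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1024,)),
    tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=metrics)
```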