Add COCO evaluation metrics #111
Hi @NielsRogge,
Great! Here's a notebook that illustrates how I'm using it. The evaluation is near the end of the notebook.
I went through the code you mentioned and I think there are two options for how we can go ahead:
In my opinion, the 2nd option looks very clean, but I'm still figuring out how it's transforming the box coordinates of …
Ok, thanks for the update. Indeed, the metrics API of Datasets is framework-agnostic, so we can't rely on a PyTorch-only implementation. This file is probably what we need to implement.
Hi @lvwerra, do you plan to add a 3rd-party application for the COCO mAP metric?
Is there any update on this? What would be the recommended way of doing COCO eval with Huggingface? |
Yes there's an update on this. @rafaelpadilla has been working on adding native support for COCO metrics in the evaluate library, check the Space here: https://huggingface.co/spaces/rafaelpadilla/detection_metrics. For now you have to load the metric as follows:
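A minimal sketch of loading a Space-hosted metric with `evaluate` (the schema of the inputs passed to `compute` is an assumption here; the Space's README documents the exact per-image format for boxes, scores and labels):

```python
# Untested sketch: load the metric module straight from the Space by its repo id.
import evaluate

detection_metric = evaluate.load("rafaelpadilla/detection_metrics")

# `predictions` and `references` are placeholders; their exact structure
# (lists of per-image dicts with boxes/scores/labels) is defined by the Space.
results = detection_metric.compute(predictions=predictions, references=references)
print(results)
```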
but this one is going to be integrated into the main `evaluate` library. This is then leveraged to create the open object detection leaderboard: https://huggingface.co/spaces/rafaelpadilla/object_detection_leaderboard.
Yep, we intend to integrate it into `evaluate`. Meanwhile, you can use the one from here: https://huggingface.co/spaces/rafaelpadilla/detection_metrics. Update: the code with the …
Hi,
Loading the metric results in the following error: …
How do I load the metric from the hub? Do I need to download the content of that repository manually first? I'm running …
Ran into the same issue @maltelorbach posted on 12/14/2023.
I spent some time digging into this. The issue is that the …
Yes, for now we switched to using Torchmetrics, as it already provides a performant implementation with support for distributed training etc., so there's no need to duplicate it. cc @qubvel
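For reference, a minimal self-contained sketch of that Torchmetrics route, using `MeanAveragePrecision` (the boxes below are toy values for a single image):

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# One predicted box and one ground-truth box for a single image (toy values).
preds = [
    dict(
        boxes=torch.tensor([[258.0, 41.0, 606.0, 285.0]]),  # xyxy format
        scores=torch.tensor([0.536]),
        labels=torch.tensor([0]),
    )
]
target = [
    dict(
        boxes=torch.tensor([[214.0, 41.0, 562.0, 285.0]]),
        labels=torch.tensor([0]),
    )
]

metric = MeanAveragePrecision(box_format="xyxy", iou_type="bbox")
metric.update(preds, target)
print(metric.compute())  # dict with map, map_50, map_75, per-size AP/AR, ...
```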
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects, which are defined in the original repository (here and here respectively). Running these in a notebook gives you nice summaries like this:
![image](https://user-images.githubusercontent.com/48327001/116878842-326f0680-ac20-11eb-9061-d6da02193694.png)
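For context, the evaluation loop in the original DETR repository looks roughly like this (a simplified sketch; `model`, `postprocessors`, `data_loader_val` and the pycocotools `coco_gt` object all come from that repo's setup code):

```python
import torch
from datasets.coco_eval import CocoEvaluator  # module from facebookresearch/detr

coco_evaluator = CocoEvaluator(coco_gt, ["bbox"])  # coco_gt: pycocotools COCO object

for samples, targets in data_loader_val:
    outputs = model(samples)
    # Post-process raw outputs into per-image dicts with "boxes", "scores"
    # and "labels", keyed by COCO image id.
    orig_sizes = torch.stack([t["orig_size"] for t in targets], dim=0)
    results = postprocessors["bbox"](outputs, orig_sizes)
    res = {t["image_id"].item(): out for t, out in zip(targets, results)}
    coco_evaluator.update(res)

coco_evaluator.synchronize_between_processes()
coco_evaluator.accumulate()
coco_evaluator.summarize()  # prints the AP/AR table shown above
```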
It would be great if we could import these metrics from the Datasets library, something like this:
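Something along these lines (a hypothetical sketch; `"coco"` isn't an existing metric name and the loop variables are placeholders):

```python
# Hypothetical API — "coco" is not an actual metric in the Datasets library.
from datasets import load_metric

metric = load_metric("coco")
for images, annotations in eval_dataloader:  # placeholder evaluation loop
    predictions = model(images)
    metric.add_batch(predictions=predictions, references=annotations)
results = metric.compute()  # AP/AR at the usual IoU thresholds and object sizes
```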
I think this would be great for object detection and semantic/panoptic segmentation in general, not just for DETR. Reproducing results of object detection papers would be way easier.
However, object detection and panoptic segmentation evaluation is a bit more complex than accuracy (it's more a summary of metrics at different IoU thresholds than a single number). I'm not sure how to proceed here, but I'm happy to help make this possible.