This repository has been archived by the owner on May 27, 2024. It is now read-only.
Hi,
I have a question regarding the evaluation metric.
How do you calculate mean average precision (mAP) for activity prediction? Is it the same as the Pascal VOC mAP
evaluation metric, or some other technique?
I see in the code that only scores and labels are considered. What about the bounding boxes?
The activity prediction mAP is computed over each person; there is no bounding box prediction. Given the observations of a person, the model outputs a multi-class probability over all actions, which is compared against the ground-truth labels. See the problem formulation in Section 3 of the paper. mAP was originally proposed in the Information Retrieval field. See the definition here.
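To make the metric concrete, here is a minimal sketch of IR-style mean average precision over per-person action scores. This is an illustration of the standard definition, not the repository's actual evaluation code; the function names and the NumPy-based layout (rows = persons, columns = action classes) are my own assumptions.

```python
import numpy as np

def average_precision(scores, labels):
    """AP for a single action class.

    scores: predicted probability of this class for each person.
    labels: binary ground truth (1 if the person performs the action).
    Persons are ranked by score; precision is averaged at each positive.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                    # positives seen so far
    ranks = np.arange(1, len(labels) + 1)       # 1-based rank positions
    precisions = hits / ranks
    return float(precisions[labels == 1].mean())

def mean_average_precision(score_matrix, label_matrix):
    """mAP: mean of per-class APs (rows = persons, cols = classes).

    Classes with no positive ground-truth instance are skipped,
    since AP is undefined for them.
    """
    score_matrix = np.asarray(score_matrix, dtype=float)
    label_matrix = np.asarray(label_matrix)
    aps = [average_precision(score_matrix[:, c], label_matrix[:, c])
           for c in range(score_matrix.shape[1])
           if label_matrix[:, c].any()]
    return float(np.mean(aps))
```

For example, a class whose two positives are ranked first and second gets AP = 1.0, with no bounding boxes involved anywhere: the ranking is purely over per-person class scores.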