
The MOTP metric is showing a low score. #118

Open
nikeshdevkota opened this issue Jan 4, 2024 · 5 comments

Comments


nikeshdevkota commented Jan 4, 2024

Hi, I am using this reference code to do simultaneous detection and tracking of small objects. The average precision and average recall of the detection model show good performance, but when it comes to tracking, the MOTP score is very low. Any suggestions on how I can improve the performance? Also, the MOTA and other metrics are high, so I can't figure out where the problem actually lies. @timmeinhardt

IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.660
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.946
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.808
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.660
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.692
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.705
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.705
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.705
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

INFO - root - mergeOverall: 0.022 seconds.
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP IDt IDa IDm
Train_Moffat 99.0% 100.0% 98.0% 98.0% 100.0% 1 1 0 0 0 10 0 0 98.0% 0.086 0 0 0
OVERALL 99.0% 100.0% 98.0% 98.0% 100.0% 1 1 0 0 0 10 0 0 98.0% 0.086 0 0 0
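For reference, here is a minimal py-motmetrics sketch (an assumption on my part that the summary above was produced with py-motmetrics). In that library, the motp column is the average IoU distance (1 - IoU) over matched boxes, so lower is better, and 0.086 corresponds to roughly 0.914 mean IoU:

```python
# Minimal py-motmetrics example (assumption: the summary above comes from
# py-motmetrics). Its 'motp' is an average *distance* over matched boxes,
# i.e. 1 - IoU, so lower values mean tighter boxes.
import motmetrics as mm
import numpy as np

acc = mm.MOTAccumulator(auto_id=True)

# One frame: one ground-truth box and one predicted box in (x, y, w, h) format.
gt = np.array([[10.0, 10.0, 20.0, 20.0]])
hyp = np.array([[11.0, 11.0, 20.0, 20.0]])

# iou_matrix returns 1 - IoU; pairs with distance above max_iou stay unmatched.
dist = mm.distances.iou_matrix(gt, hyp, max_iou=0.5)
acc.update([1], [1], dist)  # ground-truth ids, hypothesis ids, distance matrix

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['mota', 'motp', 'idf1'], name='demo')
print(mm.io.render_summary(summary, formatters=mh.formatters,
                           namemap=mm.io.motchallenge_metric_names))
```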

Currently, I am only using a single object per image, but I will change it to multiple objects once I see improvement on single-object tracking.

@timmeinhardt (Owner)

Have you tried optimizing the tracking thresholds, for example, detection_obj_score_thresh and track_obj_score_thresh? Or visualizing the output? That should give you a good idea of what is wrong.
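To make the effect of those two thresholds concrete, here is a generic sketch of how such score thresholds typically gate new detections versus existing tracks (this is not Trackformer's actual code; everything except the two parameter names is hypothetical):

```python
# Generic illustration of score-threshold gating; names other than the two
# thresholds are made up for illustration, not taken from this repository.
def filter_outputs(detections, tracks,
                   detection_obj_score_thresh=0.4,
                   track_obj_score_thresh=0.4):
    # A detection must clear the (usually stricter) detection threshold
    # before it is allowed to spawn a new track.
    new_tracks = [d for d in detections
                  if d['score'] >= detection_obj_score_thresh]
    # An existing track only needs to clear the track threshold to stay
    # alive, which trades identity switches against false positives.
    kept_tracks = [t for t in tracks
                   if t['score'] >= track_obj_score_thresh]
    return new_tracks, kept_tracks
```

Raising detection_obj_score_thresh suppresses spurious new tracks, while lowering track_obj_score_thresh helps tracks survive frames with low-confidence detections.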

@nikeshdevkota (Author)

I will do that and check if there is any improvement. Is there any other way to find the optimal tracking thresholds besides manually changing them based on intuition?

@timmeinhardt (Owner)

You could write a script to find the optimal hyperparameters, but there is no analytic way to find them. First, I would visualize your outputs to understand what's happening. That could give you an idea of which parameters to change and how.
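One way to sketch such a script, assuming a hypothetical run_tracker helper that you would wire up to your own tracking and evaluation pipeline (only the two threshold names come from the earlier comment):

```python
# Brute-force sweep over the two score thresholds. run_tracker is a
# hypothetical stand-in for whatever runs your tracker on the validation
# sequences and returns a metrics dict (e.g. computed with py-motmetrics).
import itertools

def run_tracker(detection_obj_score_thresh, track_obj_score_thresh):
    raise NotImplementedError  # wire this up to your own pipeline

best = None
for det_t, track_t in itertools.product(
        [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], repeat=2):
    metrics = run_tracker(det_t, track_t)
    # Ranked by MOTA here; swap in IDF1, or MOTP (lower is better), if that
    # is the metric you care about.
    if best is None or metrics['mota'] > best[0]:
        best = (metrics['mota'], det_t, track_t)

print('best MOTA %.3f with det_thresh=%.1f, track_thresh=%.1f' % best)
```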

@nikeshdevkota (Author)

Hi, I tried to use Visdom to visualize the training and evaluation metrics as suggested in the documentation, but the Visdom server is showing a blank blue screen.
I started the Visdom server on port 8097 by running "visdom" in the terminal, and then set the matching port number in the config file as well.
To verify that the code is connected to the Visdom server, I checked that the "Websocket connected" message appears in the log.
But when I look at the logs in logs/visdom, the folder is empty.
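As a sanity check for the blank page, a minimal smoke test with the visdom Python client (same port 8097 as above) can separate server problems from logging problems. If the dummy plot below shows up in the browser, the server itself is fine and the issue is on the logging side:

```python
# Visdom smoke test: start the server first with `visdom` (or
# `python -m visdom.server`), then run this script.
import numpy as np
import visdom

vis = visdom.Visdom(port=8097)  # same port as configured above
assert vis.check_connection(), 'could not reach the Visdom server'

# Push one dummy line plot; it should appear at http://localhost:8097.
vis.line(X=np.arange(10), Y=np.random.rand(10), win='smoke_test',
         opts={'title': 'smoke test'})
```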

I tried to visualize the bounding box locations during validation, and the predictions worked quite well. But for the test data, the predictions are random.

During the training phase, I only used validation data for tracking and didn't use any test data. The validation data itself is in a sequential format. I wanted the test data to be unseen during training.
[screenshot: test-data predictions]
The -1.23 values are obtained for the test data.

[screenshot: validation-data predictions]
For the validation data, the predictions are quite good.
@timmeinhardt any idea what's causing these issues?

@nikeshdevkota (Author)

[plots: val_eval and val_loss curves]

I managed to load results from Visdom. But I still can't figure out why the tracking works on the validation data but not on the test data.
@timmeinhardt
