Hi, thanks for the excellent work!

I followed the instructions in the README to evaluate the models provided in your repo. However, the AP I got for yolos_ti.pth, yolos_s_200_pre.pth, yolos_s_300_pre.pth, yolos_s_dWr.pth, and yolos_base.pth was 28.7, 12.5, 12.7, 13.2, and 13.8, respectively. While yolos_ti.pth matches the performance in your paper and log, the other four models score significantly lower than expected.

For example, when evaluating the base model, I ran the evaluation command and expected to obtain 42.0 AP, as shown in your paper and log. However, the result is only 13.8 AP. The complete evaluation output is shown below.

Any idea why this would happen? Thanks in advance!

Hi~ @encounter1997, thanks for your interest in YOLOS and thanks for pointing out this issue :)

The codebase of YOLOS is built upon DETR's codebase, so there is a "bug" inherited from DETR: during evaluation you need to set the number of GPUs and the per-GPU batch size to the same values used during training, e.g. num_GPU = 8 and batchsize_per_GPU = 1 for YOLOS-Small and YOLOS-Base.

It seems that you set batchsize_per_GPU = 2 during evaluation, which is what causes the AP degradation.
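For readers wondering why the per-GPU batch size can change AP at all: DETR-style data loading pads every image in a batch to the per-batch maximum height and width, so the padded tensor a given image ends up in depends on which other images share its batch. The sketch below illustrates only that padding behavior; `pad_batch` and the image sizes are hypothetical, not the actual YOLOS/DETR code.

```python
import numpy as np

def pad_batch(images):
    # DETR-style batching: pad each image to the per-batch maximum
    # height and width, filling the extra region with zeros.
    H = max(im.shape[0] for im in images)
    W = max(im.shape[1] for im in images)
    out = []
    for im in images:
        canvas = np.zeros((H, W), dtype=im.dtype)
        canvas[: im.shape[0], : im.shape[1]] = im
        out.append(canvas)
    return np.stack(out)

a = np.ones((600, 800))    # hypothetical small image
b = np.ones((800, 1333))   # hypothetical large image

# batchsize_per_GPU = 1: each image keeps its own resolution
print(pad_batch([a]).shape)     # (1, 600, 800)

# batchsize_per_GPU = 2: the smaller image is padded up to 800x1333,
# so the network sees a different input than in the batch-size-1 case
print(pad_batch([a, b]).shape)  # (2, 800, 1333)
```

Because the model's input (and, for YOLOS, the interpolated position embeddings) depends on this padded size, evaluating with a different num_GPU × batchsize_per_GPU than training shifts the inputs and degrades AP; keeping the batch configuration identical to training restores the reported numbers.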