
Getting much lower AP on LVIS after directly following the instructions #63

Open · backseason opened this issue on Nov 30, 2022 · 3 comments

@backseason

Dear authors,

Following the commands in https://github.com/microsoft/GLIP#lvis-evaluation with the provided config file and pretrained weights, the evaluation results on LVIS minival are much lower than those reported in the README.

The command I used:

```bash
CUDA_VISIBLE_DEVICES=3,4,5,6 python -m torch.distributed.launch --nproc_per_node=4 \
    tools/test_grounding_net.py \
    --config-file configs/pretrain/glip_Swin_T_O365_GoldG.yaml \
    --task_config configs/lvis/minival.yaml \
    --weight PRETRAINED/glip_tiny_model_o365_goldg.pth \
    TEST.EVAL_TASK detection \
    OUTPUT_DIR evals/lvis \
    TEST.CHUNKED_EVALUATION 40 \
    TEST.IMS_PER_BATCH 16 \
    SOLVER.IMS_PER_BATCH 16 \
    TEST.MDETR_STYLE_AGGREGATE_CLASS_NUM 3000 \
    MODEL.RETINANET.DETECTIONS_PER_IMG 300 \
    MODEL.FCOS.DETECTIONS_PER_IMG 300 \
    MODEL.ATSS.DETECTIONS_PER_IMG 300 \
    MODEL.ROI_HEADS.DETECTIONS_PER_IMG 300
```
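As a quick sanity check that the downloaded checkpoint actually loads, one could inspect it directly (a minimal sketch; the "model" key and a plain PyTorch checkpoint layout are assumptions, not confirmed against GLIP's checkpoint format):

```python
import torch

# Load the checkpoint on CPU; this works for any standard PyTorch .pth file.
ckpt = torch.load("PRETRAINED/glip_tiny_model_o365_goldg.pth", map_location="cpu")

# Checkpoints often wrap the state_dict under a key such as "model"
# (an assumption here); fall back to the raw object otherwise.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# Print a few parameter names/shapes to confirm this looks like the
# expected Swin-T GLIP model rather than a truncated or wrong download.
print(f"{len(state_dict)} entries")
for name, value in list(state_dict.items())[:5]:
    print(name, tuple(value.shape) if hasattr(value, "shape") else type(value))
```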

The corresponding evaluation results using the config and weights of GLIP-T(C), whose APr should be either ~14.3 or ~17.7:

[screenshot: LVIS minival evaluation results]

I have also tried GLIP-T(A), but the results are also much lower. Do you have any suggestions about what I might have done incorrectly?

@felixfuu

@backseason Have you solved the problem?

@JiuqingDong

> [quotes @backseason's original post above]

Did you use the model from the model zoo, or did you train the model yourself?

@SikaStar

> [quotes @backseason's original post above]

Have you solved the problem?
