
Missing ssd.py / train_pseudo137.py / train_pseudo151.py? #1

Closed · vadimkantorov opened this issue Feb 21, 2022 · 13 comments

@vadimkantorov commented Feb 21, 2022

Hi! Various train_* files contain the line from ssd import build_ssd, but ssd.py is missing from this repo.

Is it a typo? Should it instead be from csd import build_ssd_con? Or are some files from ssd.pytorch missing from this repo?

What is the difference between csd.py and isd.py?

Also, train_pseudo137.py and train_pseudo151.py mentioned at https://github.com/machengcheng2016/CrossTeaching-SSOD#33-reproduce-table3 are missing from the repo...

Thanks!

@vadimkantorov changed the title from "Missing ssd.py?" to "Missing ssd.py / train_pseudo137.py / train_pseudo151.py?" on Feb 21, 2022
@machengcheng2016 (Owner) commented Feb 22, 2022

Hello, thanks for your attention.
I've just uploaded ssd.py, which is taken directly from the original SSD repo, ssd.pytorch.
Both csd.py and isd.py are taken from the original ISD repo, ISD-SSD. Both scripts build an SSD detector with the same architecture as ssd.py, so there is no difference between them in that respect. The function build_ssd_con additionally lets the SSD detector output intermediate feature maps.
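For reference, a minimal usage sketch; the call signatures (phase, input size, number of classes) are assumed from the original ssd.pytorch / ISD-SSD repos, so please double-check them against the uploaded files:

```python
# Sketch only; signatures are assumed from ssd.pytorch / ISD-SSD, check ssd.py and csd.py.
from ssd import build_ssd        # plain SSD detector (from ssd.pytorch)
from csd import build_ssd_con    # same architecture, but also outputs intermediate feature maps

ssd_net = build_ssd('train', size=300, num_classes=21)
ssd_con_net = build_ssd_con('train', size=300, num_classes=21)
```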
I've just uploaded train_pseudo137.py, please check it out.

@vadimkantorov (Author)

From what I understand, this repo contains at least two implementations of CrossTeaching. Can I refer just to the detectron2 implementation? Is it complete?

@machengcheng2016 (Owner) commented Feb 23, 2022

Yes, you can. The proposed cross-teaching is really a training paradigm, so whichever platform you choose to implement it on, the core idea stays the same.

@vadimkantorov (Author)

This is good news. Thanks!

Yeah, I understand the paradigm part; I was just wondering whether the detectron2 implementation is complete and fully matches the description in the paper.

@machengcheng2016 (Owner)

No worries.
The core idea of cross-teaching is to rectify possibly incorrect pseudo labels through the "confidence comparison" operation given in Eq. (8) of the manuscript. Since a single detector can never rectify its own misclassified pseudo labels (it can only discard some of them), it is necessary to involve a second detector in the training paradigm.
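To make the idea concrete, here is a toy sketch of the confidence-comparison step (an illustration only, not the exact Eq. (8) from the paper; the tensor shapes, names, and threshold are made up):

```python
import torch

def cross_teach_rectify(scores_a, scores_b, threshold=0.7):
    """Toy confidence comparison between two detectors on the same pseudo boxes.

    scores_a, scores_b: (N, C) per-class confidences from detector A and detector B.
    Returns rectified class labels and a mask of boxes confident enough to keep.
    """
    conf_a, cls_a = scores_a.max(dim=1)
    conf_b, cls_b = scores_b.max(dim=1)

    # If the other detector is more confident, take its class (rectification);
    # otherwise keep the original pseudo label.
    labels = torch.where(conf_b > conf_a, cls_b, cls_a)

    # Discard boxes where neither detector is confident enough.
    keep = torch.maximum(conf_a, conf_b) >= threshold
    return labels, keep
```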

@vadimkantorov (Author)

Do you sample half of the batch from the supervised subset, as the unbiasedteacher codebase does?

@machengcheng2016 (Owner)

As far as I know, hyper-parameters (such as batch size, learning rate, and augmentations) are usually set differently across recent semi-supervised object detection papers. In fact, these settings are vital to model performance. In my experiments, I chose to follow the hyper-parameters provided by the official detectron2 platform, for fair comparison.

@vadimkantorov (Author)

Sampling in the unbiasedteacher codebase is done at https://github.com/machengcheng2016/CrossTeaching-SSOD/blob/534b7f993e58d0c19f26871a073647267f70e311/detectron2/VOC07-sup-VOC12-unsup-self-teaching-0.7/ubteacher/data/common.py#L125

It samples half of the batch from the supervised subset and half from the unsupervised subset; both halves are then subject to weak and strong augs...
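In other words, conceptually something like this (illustrative sketch only, not the actual common.py code):

```python
# Conceptual sketch of the ubteacher-style batch composition described above.
def build_semi_batch(labeled_iter, unlabeled_iter, batch_size, weak_aug, strong_aug):
    half = batch_size // 2
    labeled = [next(labeled_iter) for _ in range(half)]      # supervised subset
    unlabeled = [next(unlabeled_iter) for _ in range(half)]  # unsupervised subset

    # every image in both halves gets a weakly and a strongly augmented view
    label_weak = [weak_aug(x) for x in labeled]
    label_strong = [strong_aug(x) for x in labeled]
    unlabel_weak = [weak_aug(x) for x in unlabeled]
    unlabel_strong = [strong_aug(x) for x in unlabeled]
    return label_weak, label_strong, unlabel_weak, unlabel_strong
```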

@vadimkantorov (Author)

The detectron2 directory has five different subfolders / impls. What are the differences? Just the configs?

@vadimkantorov (Author) commented Feb 23, 2022

> Yeah I know that, so what's your question?

My question was whether cross-teaching does the sampling the same way as the original ubteacher. Now I see that it does the same sampling as the original ubteacher, so there are no more open questions about sampling.

@machengcheng2016 (Owner)

Oh, I see where the confusion is. Please check the script ubteacher/engine/trainer.py. I only use the strongly augmented data for training, since I want to avoid the effect of different augs between the labeled and unlabeled batch data. In fact, I've tested both strong and weak augs for supervised training, and I found that strong aug improves the supervised baseline mAP.
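Roughly, out of the weak/strong views of both halves, only the strongly augmented ones enter the training loss. A toy sketch of that selection (made-up names, not the actual trainer.py code):

```python
# Toy sketch: keep only the strongly augmented views for the training loss.
# (In the unbiased-teacher setup, the weak views of unlabeled data are typically
# what the teacher uses to generate pseudo labels; they just don't enter the
# student's loss here.)
def select_training_data(label_weak, label_strong, unlabel_weak, unlabel_strong):
    return label_strong + unlabel_strong
```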

@machengcheng2016 (Owner)

> The detectron2 directory has five different subfolders / impls. What are the differences? Just the configs?

Those are only used for the COCO experiments. The differences between the 5 configs lie only in the random seed. You can json.load the COCO_supervision.txt file in the dataseed folder and see what changes.
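For example, something along these lines (the exact JSON structure is an assumption, so just print it and take a look):

```python
import json

# Inspect the dataseed file to see how the labeled subsets differ across the 5 configs.
with open("dataseed/COCO_supervision.txt") as f:
    seeds = json.load(f)

# Assumed layout, e.g. labeled-percentage -> seed id -> labeled image indices
print(list(seeds.keys()))
```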

@vadimkantorov (Author)

I see! It would be great to have some recipes for the COCO experiments too.
