Questions about ZegCLIP training #7
I appreciate your interest in our work. Could you please confirm the mask weight you used in …?
Besides, a widely used trick from many previous works, slightly reducing the logits on the seen classes, is helpful in the inductive setting. I set this parameter to 0.1. Did you also use it in your inference? You may also try changing the factor to see the difference.
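For reference, here is a minimal sketch of this seen-class suppression trick, assuming per-pixel logits of shape (num_classes, H, W) and that the factor is a constant subtracted from the seen-class scores before the argmax; the function name, argument names, and the exact form of the adjustment are illustrative, not ZegCLIP's actual API:

```python
import torch

def suppress_seen_logits(logits: torch.Tensor, seen_idx, gamma: float = 0.1):
    """Down-weight seen-class logits to reduce bias toward training classes.

    logits:   (num_classes, H, W) per-pixel class scores.
    seen_idx: indices of the seen (training) classes.
    gamma:    suppression strength; 0.1 matches the factor mentioned above.
    """
    adjusted = logits.clone()
    adjusted[seen_idx] -= gamma  # subtract a constant bias from seen-class scores
    return adjusted

# Example: per-pixel prediction after suppression
# pred = suppress_seen_logits(logits, seen_idx).argmax(dim=0)
```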
Thanks for your prompt reply. This discrepancy was caused by me overlooking the model's effective batch size. In MMSeg, batch_size = num_gpus * samples_per_gpu. Your paper mentions training on 4 GPUs, a condition I had missed: I was using only a single card, so the amount of training data per iteration was only 1/4 of yours. After training with the full amount, the performance improved significantly. However, it is still 1-2 points below the paper. I think this is because the effective batch size changed substantially and I did not adjust hyperparameters such as the learning rate accordingly. Thank you once again!
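As a quick sanity check, a minimal sketch of the effective-batch-size arithmetic and the linear learning-rate scaling rule being discussed; samples_per_gpu and the base learning rate below are assumed values, so check the repo's actual configs:

```python
# Effective batch size in MMSeg: batch_size = num_gpus * samples_per_gpu.
num_gpus_paper = 4        # the paper trains on 4 GPUs
samples_per_gpu = 4       # per-GPU batch size (assumed; see the actual config)
effective_batch_paper = num_gpus_paper * samples_per_gpu  # 16

# On a single GPU, either raise samples_per_gpu to match (memory permitting)...
single_gpu_samples = effective_batch_paper  # 16

# ...or keep the per-GPU batch and scale the learning rate linearly:
base_lr = 2e-4                                          # assumed paper LR
my_batch = 1 * samples_per_gpu                          # 4 on one GPU
scaled_lr = base_lr * my_batch / effective_batch_paper  # 5e-5
```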
Thank you for your feedback.
@Qyizos Hi, I am trying to validate the results on cocostuff164k, but I get very good results for 11 classes while all the rest are zero. Can you guide me on what I am missing here? I ran the code with only the data path updated; the rest of the repo code was unchanged.
I am very happy to see your work on ZegCLIP; it is very interesting and very helpful to me. I am having a little trouble with your code.
I used the .pth file you provided for inference and got results consistent with the paper. However, training with the same Docker environment and source code yields a model whose inference results deviate noticeably. Under the inductive setting on the VOC dataset, my results differ from yours by less than 2 points, but under the inductive setting on the COCO dataset the gap is as large as 7 points. The experimental results are shown in the attachment.
Can you help answer this question?