When could we expect to see code #1
+1
Thanks for your attention. Because of COVID-19, everything has unfortunately slowed down. We are working on it and plan to release the code this summer. Thanks for understanding.
@jason718 Any update? CVPR is now happening, and the paper is unfortunately scant on many implementation details (it tells us to see the released code, which has not been released).
Bump. Really hoping this isn't yet another irreproducible paper that leaves details to code that is never released.
Sorry for the late reply, and thanks for your attention. We are preparing it, and also incorporating a follow-up paper into the same repo. The current plan is to wait until the ECCV results come out, and then we will release the code for both papers.
In that case, would you mind sharing some details that are absent from the paper? I will list some questions below:

If you change the output of the second conv layer to 1 channel, how do you handle the residual connection? The input to the residual block is still HxWxC while the output is HxWx1, so the only way to add them is to allow broadcasting, and then you still end up with HxWxC?
Hi, is your code implemented in PyTorch?
A 1x1 conv is used on the residual connection, the same as in ResNet.
Yes.
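Putting the two answers above together, here is a minimal sketch of such a residual block in PyTorch. The block name, channel sizes, and layer layout are my assumptions for illustration; the only detail taken from the thread is the 1x1 conv projecting the shortcut down to 1 channel so the shapes match without broadcasting:

```python
import torch
import torch.nn as nn

class OneChannelResBlock(nn.Module):
    """Hypothetical residual block whose second conv outputs 1 channel.

    A 1x1 conv projects the HxWxC identity path down to HxWx1, the same
    trick ResNet uses whenever the main path changes dimensions.
    """

    def __init__(self, in_channels, hidden_channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, hidden_channels, 3, padding=1)
        self.conv2 = nn.Conv2d(hidden_channels, 1, 3, padding=1)  # main path -> 1 channel
        self.proj = nn.Conv2d(in_channels, 1, 1)  # 1x1 conv on the shortcut
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        # Both tensors are now N x 1 x H x W, so no broadcasting is needed.
        return self.relu(out + self.proj(x))

block = OneChannelResBlock(in_channels=256)
y = block(torch.randn(2, 256, 7, 7))  # -> shape (2, 1, 7, 7)
```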
Hi, will the code be released? @jason718
+1
Hello, when will the code be released?
Hi @liuyuan11 @SongYii @youthHan, and many others, we want to thank everyone for the attention to this work. We are sorry for the delay and are actively working on a high-quality code base. The purpose of this repo was to create a placeholder so that everyone knows where the code will appear in the first place. The delay is mostly due to:
We apologize for the unrigorous descriptions in the initial paper, and have updated the arXiv paper with more implementation details. Feel free to raise any technical questions here or email the authors. We are more than happy to answer.
You say in the paper that you take p percent of proposals, and later that p = 0.15. Can I confirm this means you take 0.15% (approximately 3) regions per image? And is there anything else worth noting about the pseudo-labelling algorithm? I have working implementations of both OICR and PCL, but have not been able to replicate even your MIST w/o Regression result with any set of hyperparameters (the original OICR, the original PCL, the latest PCL from pcl.pytorch, or the new parameters you've listed).
@bradezard131 @jason718 I thought the "self-training with regression" came from the main idea of the ICCV 2019 paper 'Towards Precise End-to-End Weakly Supervised Object Detection Network'. Please correct me if I am wrong.
@bityangke You are mistaken; they take a different approach. I am referring to MIST w/o Reg. from Table 5, which should be a normal OICR model but with the MIST algorithm (Algorithm 1).
@bradezard131 p=0.15 means the top 15 percent. It takes more than enough proposals as the initial pool from which the pseudo-labels are further generated. As you said, setting p=0.15% would only pick ~3 RoIs, and then MIST w/o regression is almost the same as OICR (top-1). We apologize for the writing in the paper; please try p=15% in your code base. I'm also curious why you cannot reproduce the result. If you still cannot solve it, could you start a new issue and share more information with us? From the information provided so far, I cannot tell what happened.
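To make the p=0.15 clarification concrete, here is a hedged sketch of how top-p pseudo-label selection could look. The function names, the NMS-style de-duplication step, and the IoU threshold are my assumptions, not the authors' released code; the only detail taken from the thread is that p is a fraction (0.15 = top 15% of proposals), not 0.15%:

```python
import numpy as np

def iou(a, b):
    # a, b: [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter)

def mist_select(scores, boxes, p=0.15, iou_thresh=0.2):
    """Sketch of top-p pseudo-label selection for one class.

    Rank proposals by class score, keep the top p (15%, not 0.15%) as
    the candidate pool, then suppress near-duplicates so several
    spatially separated instances survive as pseudo ground truth.
    """
    order = np.argsort(-scores)
    keep_n = max(1, int(round(p * len(scores))))
    pool = order[:keep_n]  # top 15% of proposals
    selected = []
    for idx in pool:
        if all(iou(boxes[idx], boxes[s]) < iou_thresh for s in selected):
            selected.append(idx)
    return selected
```

With p=0.15% instead, `keep_n` would collapse to roughly 3 proposals per image, which (as the answer above notes) degenerates to OICR-style top-1 selection.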
@jason718 Also, the paper is about WSOD, but in the Experiments - Datasets section I only see fully-annotated datasets being used. Where are the details of the image-level datasets you mention?
@doduythao All WSOD papers still use fully-annotated datasets (VOC, COCO), and I guess this is mainly for evaluation purposes; otherwise it would be hard to compare against most detection works. I am not aware of any dataset specifically collected for WSOD (image-level tags only for the training set, and bounding boxes for the test set). Feel free to share and suggest any that you know of.
Hi, will you release code after ECCV? Thanks |
@BlueBlueFF that's the plan. |
@jason718 UFO2: A Unified Framework towards Omni-supervised Object Detection, and Instance-aware, Context-focused, and Memory-efficient Weakly Supervised Object Detection. Nice work on WSOD!! Will the code be released this month?
Hey,
Super keen to see code for this. It's very interesting and I've started trying to reimplement it, however, the paper doesn't include all the details and says to reference the code. Any idea when we could expect to see it (so I don't keep coming back every day to check if it's updated)?
Cheers