
I think FCOS has been discredited too much by you. (Why is the performance of FCOS so poor?) #52

Closed
dreamlychina opened this issue Jul 12, 2021 · 4 comments
Labels
paper&experiment Question about paper and experiment

Comments

@dreamlychina

dreamlychina commented Jul 12, 2021

The performance numbers of FCOS are much worse than the results reported in other papers.

@yinglang yinglang changed the title from "感觉fcos被你们黑的好惨啊" to "I think FCOS has been discredited too much by you. (Why is the performance of FCOS so poor?)" Jul 13, 2021
@yinglang
Contributor

yinglang commented Jul 13, 2021

First, our experimental conclusions:

  1. The original FCOS fails on tiny object detection across many datasets: with both MaskRCNN_Benchmark and mmdetection, and on TinyPerson, Tiny Pascal VOC, and Tiny COCO alike.
  2. In an accidental experiment last year, we found that removing GN (group norm) from the FCOS head and fine-tuning some hyperparameters lets it achieve performance comparable to other frameworks, in both MaskRCNN_Benchmark and mmdetection. We have published that configuration in our mmdetection version.
  3. We do not yet know why GN causes this problem.
  4. If you have run experiments whose results differ substantially from those of our published code, we would be very grateful if you could share the results and the problem. We will try to fix them if time permits.
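
For concreteness, the GN removal in point 2 can be sketched as an mmdetection-style config override. In mmdetection, `FCOSHead` defaults to `norm_cfg=dict(type='GN', num_groups=32)`; setting `norm_cfg=None` drops GN from the head's conv stacks. The other field values below (class count, channels) are illustrative assumptions, not the authors' published config:

```python
# Illustrative mmdetection-style bbox_head override for FCOS.
# Setting norm_cfg=None removes group norm from the head, per the finding above.
model = dict(
    bbox_head=dict(
        type='FCOSHead',
        num_classes=1,        # hypothetical: single 'person' class as in TinyPerson
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        norm_cfg=None,        # default is dict(type='GN', num_groups=32); None disables it
    ),
)
```

The rest of the hyperparameter fine-tuning mentioned above is not pinned down in this thread, so it is omitted here.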

Secondly, we have not discredited FCOS.

  1. FCOS is a nice piece of work. Together with ATSS, it made people realize the importance of label assignment in detection. I like and admire the work.
  2. Why is the performance of FCOS in the paper so poor? When we published Scale Match and TinyPerson, we had not yet found the GN problem. We truly tried everything we could think of at the time (such as adjusting the learning rate, batch size, and implicit anchor size), but none of it worked. Others have hit this problem too, which this issue also confirms.

Finally, we welcome everyone to discuss experiment or code issues, and we thank you for correcting our work. However, if you want to vent, please take it to Zhihu or other social platforms; I will try my best to respond there as well. In addition, since some researchers do not understand Chinese, we encourage everyone to communicate in English.

Thanks


@yinglang
Contributor

This is a very good question, but it would be better if the questioner’s attitude could be more friendly : ).

@dreamlychina
Author

dreamlychina commented Jul 13, 2021

First, the points I want to make in reply:
(1) Since you say FCOS fails no matter which dataset it is trained on, how did you obtain the FCOS numbers in your paper??? Could one suspect the experiments were fabricated? If the experiments failed, you could simply have omitted the FCOS numbers rather than putting the method down, or stated that FCOS is not suited to tiny object detection.
(2) I did not mean to attack you. I raised the question after comparing your paper's experiments with those in other papers, because the gap in the numbers is more than 10 points.

@yinglang
Contributor

yinglang commented Jul 13, 2021

What I mean by "failed" is not that the experiment produces no result, but that training diverges easily, and even when it converges the performance is far below the usual value.

If you run it many times, you can still get a number.
Besides, without Scale Match pretraining, RetinaNet also diverges easily.

@yinglang yinglang added the paper&experiment Question about paper and experiment label Jul 14, 2021