Hello, and thank you for sharing your work. While reading the paper, I was a bit confused about the training loss and would like to ask: why can the loss be replaced with Mean-max(X) and FL_(p)? I don't quite follow this; could you explain the derivation? Thank you!
1. Mean-max is a selection mechanism that replaces the max in the learning objective. It is smoother at the start of training, and the objective it defines is ultimately equivalent to max. 2. There is no derivation for FL_(p); it simply replaces the cross-entropy loss -log(1 - p) with Focal Loss to deal with the overwhelming number of negative samples. Feel free to email zhangxiaosong18@mails.ucas.ac.cn for further discussion!
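The two points above can be illustrated numerically. This is only a sketch: the `mean_max` formula below assumes the Mean-max(X) definition from the FreeAnchor paper (a weighted mean where each score x is weighted by 1/(1 - x)), and `focal_loss_neg` assumes the standard focal-loss modulation of the negative cross-entropy term; both function names are made up for this example.

```python
import math

def mean_max(xs):
    # Assumed Mean-max definition: a smooth stand-in for max.
    # When all scores are similar it behaves like a plain mean
    # (smooth early in training); when one score approaches 1,
    # its weight 1/(1 - x) dominates and the result approaches max.
    num = sum(x / (1.0 - x) for x in xs)
    den = sum(1.0 / (1.0 - x) for x in xs)
    return num / den

def focal_loss_neg(p, gamma=2.0):
    # Focal-loss replacement for the cross entropy -log(1 - p):
    # the p**gamma factor down-weights the many easy negatives
    # (small p), so they no longer swamp the loss.
    return -(p ** gamma) * math.log(1.0 - p)

# Early training: all scores similar -> Mean-max acts like mean.
print(mean_max([0.5, 0.5, 0.5]))          # 0.5, the mean

# Later: one score dominates -> Mean-max approaches max.
print(mean_max([0.1, 0.2, 0.99]))         # close to 0.99

# An easy negative (p = 0.1) contributes far less than under
# the plain cross entropy -log(1 - p).
print(focal_loss_neg(0.1), -math.log(0.9))
```

The gamma exponent is the usual focal-loss focusing parameter; larger values suppress easy negatives more aggressively.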
I see, understood now. Thank you very much for your reply!