The source code is as follows:

import numpy as np
from scipy.optimize import minimize

def optimal_threshold(y_true, y_pred):
    """Automatically search for the optimal decision threshold."""
    loss = lambda t: -np.mean((y_true > 0.5) == (y_pred > np.tanh(t)))
    result = minimize(loss, 1, method='Powell')
    return np.tanh(result.x), -result.fun

Hello, I'd like to ask: in the Powell minimization of (negative) accuracy, why does the threshold need the np.tanh(t)? What would be wrong with using (y_pred > t) directly? I'd like to understand the difference between the two; I thought about it all night without figuring it out, and I hope you can point me in the right direction.
y_pred lies between -1 and 1, while t is unconstrained. Wrapping it in tanh maps it into (-1, 1), which avoids meaningless results: once t goes out of that range, the search not only wastes compute but can also easily break down.
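The effect described in this answer can be sketched with synthetic data (the labels, scores, and noise level below are assumptions purely for illustration, not from the original code): once t leaves (-1, 1) the accuracy surface is completely flat, so every probe the optimizer makes there is wasted, whereas with the tanh reparameterization every probe corresponds to a valid threshold.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data (assumption for illustration): labels in {0, 1},
# scores already squashed into (-1, 1) as in the original model.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200).astype(float)
y_pred = np.tanh(2 * y_true - 1 + rng.normal(0, 0.8, size=200))

# Without the reparameterization, Powell treats t as unconstrained and may
# probe values far outside (-1, 1), where the loss is constant, so the line
# search gains no information there.
loss_raw = lambda t: -np.mean((y_true > 0.5) == (y_pred > t))
print(loss_raw(5.0) == loss_raw(100.0))  # flat out-of-range region

# With tanh, t stays unconstrained for the optimizer, but the effective
# threshold np.tanh(t) always lands inside (-1, 1), so every probe matters.
loss_tanh = lambda t: -np.mean((y_true > 0.5) == (y_pred > np.tanh(t)))

result = minimize(loss_tanh, 1, method='Powell')
best_threshold = np.tanh(result.x).item()  # guaranteed to be in (-1, 1)
best_acc = -result.fun
print(best_threshold, best_acc)
```

The flatness check makes the answer's point concrete: any two out-of-range thresholds give identical loss, so an unconstrained Powell search can stall or wander there, while the reparameterized version cannot leave the meaningful region.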
Got it, thanks! (Just set up a new VPN, and the moment my access was restored I rushed over to reply.)