
Question about the Loss computed by comp_class_vec in Code/4_viewer/6_hook_for_grad_cam.py #44

Open
thgpddl opened this issue Apr 10, 2022 · 1 comment

@thgpddl commented Apr 10, 2022

The source code is as follows:

import numpy as np
import torch


def comp_class_vec(ouput_vec, index=None):
    if index is None:
        index = np.argmax(ouput_vec.cpu().data.numpy())   # int: index of the predicted class
    else:
        index = np.array(index)
    index = index[np.newaxis, np.newaxis]                  # (1, 1) ndarray
    index = torch.from_numpy(index)                        # (1, 1) Tensor
    one_hot = torch.zeros(1, 1000).scatter_(1, index, 1)   # one-hot encoding: (1, 1000) Tensor, all zeros except a single 1
    one_hot.requires_grad = True
    class_vec = torch.sum(one_hot * ouput_vec)             # the "loss" (in the original script this uses the global `output`, the model output)

    return class_vec

Based on my understanding of how this Loss is computed:

For example, in a 5-class case where the class at pos=3 has the highest probability, ouput_vec = [0.1, 0.1, 0.6, 0.1, 0.1],
one_hot = [0, 0, 1, 0, 0],
and torch.sum(one_hot * output) = 0.6.

If the probability of the pos=3 class were even higher, the computed torch.sum(one_hot * output) would be larger. But intuitively that means the network is more confident in the correct class, so shouldn't the Loss be lower?
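
For reference, that arithmetic can be checked directly. A minimal sketch with the hypothetical 5-class scores above (note the largest score sits at 0-based index 2):

import torch

# Hypothetical 5-class output vector from the example above.
output = torch.tensor([[0.1, 0.1, 0.6, 0.1, 0.1]])

index = output.argmax(dim=1, keepdim=True)                 # tensor([[2]])
one_hot = torch.zeros_like(output).scatter_(1, index, 1)   # tensor([[0., 0., 1., 0., 0.]])
class_vec = torch.sum(one_hot * output)                    # score of the top class
print(class_vec.item())                                    # ~0.6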

@TingsongYu (Owner) commented Apr 15, 2022

This is not a loss function here; it is an activation (the score of the target class), and that activation is what gets backpropagated. So a larger value is indeed what we want. See the paper for reference:
[screenshots of the relevant equations from the Grad-CAM paper]
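
To make the point concrete: Grad-CAM backpropagates this class score (not a loss) and uses the resulting gradients at the last convolutional layer to weight that layer's feature maps. Below is a minimal sketch, not the repository's 6_hook_for_grad_cam.py; it uses an untrained torchvision ResNet-18, a random input, and a tensor hook to grab the feature-map gradient, purely to show the mechanics:

import torch
import torch.nn.functional as F
from torchvision import models

fmap, grad = [], []

def forward_hook(module, inp, out):
    fmap.append(out)                                   # feature map of the last conv block
    out.register_hook(lambda g: grad.append(g))        # gradient of that feature map

model = models.resnet18().eval()
model.layer4[-1].register_forward_hook(forward_hook)

img = torch.randn(1, 3, 224, 224)                      # placeholder input
output = model(img)                                    # (1, 1000) class scores

index = output.argmax(dim=1, keepdim=True)
one_hot = torch.zeros_like(output).scatter_(1, index, 1)
class_vec = torch.sum(one_hot * output)                # target-class score, not a loss

model.zero_grad()
class_vec.backward()                                   # backpropagate the activation

weights = grad[0].mean(dim=(2, 3), keepdim=True)       # GAP over gradients -> channel weights
cam = F.relu((weights * fmap[0]).sum(dim=1))           # weighted sum of feature maps, then ReLU
cam = F.interpolate(cam.unsqueeze(1), size=img.shape[-2:], mode="bilinear", align_corners=False)
print(cam.shape)                                       # torch.Size([1, 1, 224, 224])

Note that class_vec is only the starting point of backpropagation; nothing is minimized, which is why it is not a loss in the usual sense.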
