About the LDAM Loss #13

Open
sakumashirayuki opened this issue Dec 1, 2020 · 4 comments

Comments


sakumashirayuki commented Dec 1, 2020

Thanks a lot for your code!
I have read your paper and code; it's really a good idea, but I have a question about the LDAM loss. It concerns the last line, where the basic cross_entropy function from PyTorch is called:

    def forward(self, x, target):
        index = torch.zeros_like(x, dtype=torch.uint8)
        index.scatter_(1, target.data.view(-1, 1), 1)

        index_float = index.type(torch.cuda.FloatTensor)
        # self.m_list[None, :] adds a batch dimension to the original m_list
        batch_m = torch.matmul(self.m_list[None, :], index_float.transpose(0, 1))
        # the view is equivalent to a transpose
        batch_m = batch_m.view((-1, 1))
        x_m = x - batch_m
        # only the target-label position uses x_m; all other positions keep x
        output = torch.where(index, x_m, x)
        return F.cross_entropy(self.s * output, target, weight=self.weight)

Why is the output multiplied by s (here 30)? Is it just to make the loss greater? However, we didn't do this for the focal loss.
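
For concreteness, here is a small standalone snippet (my own illustration with made-up values, not code from the repo) comparing cross_entropy on the same logits with and without the s scaling:

    import torch
    import torch.nn.functional as F

    # Made-up logits for 2 samples over 3 classes, purely for illustration.
    output = torch.tensor([[2.0, 1.0, 0.5],
                           [0.3, 1.2, 0.8]])
    target = torch.tensor([0, 1])
    s = 30  # the scale factor in question

    print(F.cross_entropy(output, target))      # without scaling
    print(F.cross_entropy(s * output, target))  # with scaling, as in the last line above

    # Multiplying the logits by s sharpens the softmax distribution,
    # so the resulting loss is not simply s times larger.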

@lipingcoding

the same question

@jinwon-samsung

same question here as well

@xiangly55

same question here

@tkasarla

tkasarla commented Apr 10, 2022

I'm not sure if anyone found an explanation for this, but I also have the same question.
