
About CORAL Loss #3

Open
ZhouWenjun2019 opened this issue Apr 17, 2021 · 2 comments

Comments
@ZhouWenjun2019

There may be an error in the CORAL loss:
loss = torch.norm(torch.mul((source_covariance-target_covariance), (source_covariance-target_covariance)), p="fro")
It should be
loss = torch.norm((source_covariance-target_covariance), p="fro")

@A-New-Page

I agree with @ZhouWenjun2019

@agrija9
Owner

agrija9 commented Apr 3, 2022

@ZhouWenjun2019, @A-New-Page,

According to the Deep CORAL paper (https://arxiv.org/pdf/1607.01719.pdf), the CORAL loss is defined in terms of the squared matrix Frobenius norm (see equation 1, section 3.1).
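
For reference, equation 1 of the paper writes the loss as follows, where C_S and C_T are the d×d source and target feature covariance matrices:

$$\ell_{\mathrm{CORAL}} = \frac{1}{4d^2}\,\lVert C_S - C_T \rVert_F^2$$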

My understanding when implementing this method is that if I just take

loss = torch.norm((source_covariance-target_covariance), p="fro")

I am only computing the Frobenius norm and not taking the square into account. That is, I am only computing || · ||_F.

See the definition of Frobenius Norm (https://mathworld.wolfram.com/FrobeniusNorm.html).

By adding torch.mul((source_covariance-target_covariance), (source_covariance-target_covariance)), I am making sure that I compute the squared matrix Frobenius norm, i.e. || · ||²_F.
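
For comparison, here is a minimal, self-contained sketch (the random matrices below are only stand-ins for the covariance matrices computed in the repo) that evaluates the three candidate expressions side by side:

```python
import torch

torch.manual_seed(0)

# Hypothetical d x d stand-ins for the source/target covariance matrices
d = 4
source_covariance = torch.randn(d, d)
target_covariance = torch.randn(d, d)

diff = source_covariance - target_covariance

# Current implementation: element-wise square of the difference, then Frobenius norm
current = torch.norm(torch.mul(diff, diff), p="fro")

# Proposed change: plain Frobenius norm of the difference, i.e. || · ||_F
proposed = torch.norm(diff, p="fro")

# Squared Frobenius norm written explicitly, i.e. || · ||²_F
squared = torch.norm(diff, p="fro") ** 2  # same value as (diff * diff).sum()

print(current.item(), proposed.item(), squared.item())
```

Printing the three values makes it easy to check which expression matches || · ||²_F for a given difference matrix.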

Let me know your thoughts.
