
stopThr not used in empirical_sinkhorn_divergence #421

Closed
tlacombe opened this issue Dec 20, 2022 · 4 comments

Comments

@tlacombe (Contributor)

Describe the bug

ot.bregman.empirical_sinkhorn_divergence has (like most functions in the module) a parameter stopThr to control the desired marginal error, with default value 1e-9. However, this function calls empirical_sinkhorn2 with a hardcoded stopThr=1e-9, so the stopThr parameter of empirical_sinkhorn_divergence has no effect.

Code sample

import numpy as np
import ot

n, m = 10, 10
X = np.random.randn(n, 2)
Y = np.random.randn(m, 2)

ot.bregman.empirical_sinkhorn_divergence(X, Y, reg=1, verbose=True, stopThr=1e-2)

Outputs:

It.  |Err         
-------------------
    0|1.672739e-01|
   10|1.323828e-02|
   20|1.179009e-03|
   30|1.250032e-04|
   40|2.445311e-05|
   50|6.149783e-06|
   60|1.594631e-06|
   70|4.145982e-07|
   80|1.078188e-07|
   90|2.803949e-08|
  100|7.292000e-09|
  110|1.896371e-09|

Expected behavior

The iterations should stop when Err < stopThr, not 1e-9.
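For reference, here is a minimal self-contained Sinkhorn loop (a simplified sketch, not POT's actual implementation; the function name and structure are illustrative) showing the role stopThr is supposed to play: the iterations terminate as soon as the marginal violation drops below it.

```python
import numpy as np

def sinkhorn_sketch(a, b, M, reg, stopThr=1e-9, numItermax=1000):
    """Basic Sinkhorn iterations; stops once the marginal error < stopThr."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(numItermax):
        v = b / (K.T @ u)
        u = a / (K @ v)
        # violation of the second marginal by the current transport plan
        err = np.abs(v * (K.T @ u) - b).sum()
        if err < stopThr:
            break
    return u[:, None] * K * v[None, :]  # transport plan
```

With a loose stopThr (e.g. 1e-2) this exits after a handful of iterations, which is the behavior the stopThr parameter of empirical_sinkhorn_divergence is expected to expose.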

Additional information

A similar issue seems to occur in sinkhorn_epsilon_scaling, where the subsequent call to sinkhorn_stabilized(..., stopThr=1e-9) also hardcodes the threshold.

Fix proposition

Unless this behavior is intentional (in which case it may be worth documenting), a reasonable fix is to change stopThr=1e-9 to stopThr=stopThr in the subcalls (6 instances in empirical_sinkhorn_divergence, 1 in sinkhorn_epsilon_scaling, as far as I can tell).
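The bug and the fix boil down to a keyword argument being shadowed by a hardcoded value in a subcall. Schematically (toy functions, not POT's actual code):

```python
def inner(stopThr=1e-9):
    # stand-in for empirical_sinkhorn2 / sinkhorn_stabilized
    return stopThr

def wrapper_buggy(stopThr=1e-9):
    # current behavior: the caller's stopThr is silently ignored
    return inner(stopThr=1e-9)

def wrapper_fixed(stopThr=1e-9):
    # proposed fix: forward the caller's value
    return inner(stopThr=stopThr)
```

Here wrapper_buggy(stopThr=1e-2) still passes 1e-9 down, while wrapper_fixed(stopThr=1e-2) forwards 1e-2 as expected.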

I can take care of the fix PR if this sounds like a good solution (or leave it to someone used to contributing to POT who can do it in 2 minutes).

@rflamary (Collaborator)

Hello @tlacombe, you are absolutely right and we would welcome your PR.

@rflamary (Collaborator)

Also note that we do not exactly compute the Sinkhorn divergence, because the losses returned by sinkhorn2 and empirical_sinkhorn2 do not include the entropy term.

We should add a parameter to those functions to return the full loss with the entropy term, but in the meantime I would appreciate it if you updated the documentation to reflect that we are not doing that at the moment.
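For the record, the debiased Sinkhorn divergence is usually defined (e.g. in Feydy et al., "Interpolating between Optimal Transport and MMD using Sinkhorn Divergences") from the full entropic loss, entropy term included; roughly:

$$\mathrm{OT}_\varepsilon(a,b) = \langle P^\star, M\rangle + \varepsilon\,\mathrm{KL}\!\left(P^\star \,\|\, a\otimes b\right), \qquad S_\varepsilon(a,b) = \mathrm{OT}_\varepsilon(a,b) - \tfrac12\,\mathrm{OT}_\varepsilon(a,a) - \tfrac12\,\mathrm{OT}_\varepsilon(b,b)$$

whereas, as noted above, the current sinkhorn2 / empirical_sinkhorn2 losses correspond to the transport term $\langle P^\star, M\rangle$ only.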

@tlacombe (Contributor, issue author)

OK, I may do that as well, perhaps in a separate PR to avoid confusion.

@rflamary (Collaborator)

Closing this issue since the PR was merged.
