[CPP] Remove zero diameter check for edges in 0-dim computation #43

Merged (6 commits) on Nov 8, 2021

Conversation

@ulupo (Collaborator) commented Nov 8, 2021

Fixes #31, adding a unit test for "equivariance" (the property I was checking for in the example in #31) which fails in main but not after the fix.

New version of #41.

This was due to adding non-finite diagonal elements to themselves when doing dm + dm.T; we can avoid having those elements in the first place.
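A minimal sketch of the symmetrisation pitfall described above, assuming a NumPy-style upper-triangular dm with non-finite placeholder entries on the diagonal (the values and layout are illustrative, not the actual gph code):

```python
import numpy as np

# Illustrative only: an upper-triangular distance matrix whose diagonal
# holds non-finite placeholder entries.
dm = np.array([[np.inf, 1.0, 2.0],
               [0.0, np.inf, 3.0],
               [0.0, 0.0, np.inf]])

naive = dm + dm.T          # diagonal entries are added to themselves: inf + inf
np.fill_diagonal(dm, 0.0)  # avoid having non-finite diagonal entries at all
safe = dm + dm.T           # diagonal is now 0; off-diagonal part is symmetric
```

Zeroing the diagonal before symmetrising sidesteps the problem entirely, which is the spirit of the fix.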
@ulupo added the "bug" label (Something isn't working) Nov 8, 2021
Review threads (resolved) on:

  • gph/src/ripser.h
  • gph/python/test/test_ripser.py
@ulupo (Collaborator, Author) commented Nov 8, 2021

@MonkeyBreaker the test is good, but unlike what I confidently stated above, it does not fail (or fails so rarely that I couldn't make it fail) on current main. Evidently, hypothesis does not produce something like my example in the discussion in #31 (for which I had to pick a specific random seed).

What I am thinking is:

  • keep the current test as is -- we like hypothesis
  • add my specific example by hand at the end of the test
    • make sure once again that this addition does fail in main
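The plan above might be sketched roughly as follows. This is a plain-NumPy stand-in, not the actual test_ripser.py code: the real test uses hypothesis to generate inputs and compares persistence diagrams, whereas here a simpler permutation-invariant quantity (the multiset of pairwise distances) serves to illustrate the "equivariance" property:

```python
import numpy as np

# Stand-in for the "equivariance" property: relabelling the points of a
# distance matrix must not change permutation-invariant outputs. The real
# test compares persistence diagrams; here we check a simpler invariant,
# the multiset of pairwise distances.
rng = np.random.default_rng(0)
n = 6
a = rng.random((n, n))
dm = (a + a.T) / 2.0       # symmetric distance-like matrix
np.fill_diagonal(dm, 0.0)

perm = rng.permutation(n)
dm_perm = dm[np.ix_(perm, perm)]  # simultaneous row/column relabelling

orig = np.sort(dm[np.triu_indices(n, k=1)])
permed = np.sort(dm_perm[np.triu_indices(n, k=1)])
assert np.allclose(orig, permed)
```

In the actual test suite, hypothesis would generate `dm` and `perm`, with the hand-picked example from #31 appended (or kept in a separate test) as a deterministic regression check.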

@ulupo ulupo changed the title Remove zero diameter check for edges in 0-dim computation [CPP] Remove zero diameter check for edges in 0-dim computation Nov 8, 2021
@MonkeyBreaker (Collaborator):

keep the current test as is -- we like hypothesis

Well, we must be confident that our test fails if we break something.
I am not a fan of having a test that won't reliably fail, on every run, when the corresponding feature is broken.

add my specific example by hand at the end of the test

Well, instead of adding it at the end of the test, implement 2 tests: one with hypothesis, which generates many inputs, and one with the hard-coded input data that we know fails if the fix is not implemented.
If the input matrix is small, I would hard-code the matrix directly, because depending on the behaviour of np.seed() is not, IMO, reliable enough.
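The distinction being drawn here could be sketched like this (the helper name and matrix values are hypothetical, not the actual failing input from #31):

```python
import numpy as np

def make_input_via_seed():
    # Fragile style: the regression input depends on the behaviour of
    # NumPy's random generator, not just on the test file itself.
    rng = np.random.default_rng(42)
    a = rng.random((3, 3))
    dm = (a + a.T) / 2.0
    np.fill_diagonal(dm, 0.0)
    return dm

# Robust style: write the known-failing matrix out explicitly, so the
# regression test is self-contained and reproducible everywhere.
# (These values are placeholders, not the real counterexample.)
HARD_CODED_DM = np.array([[0.0, 0.5, 0.5],
                          [0.5, 0.0, 0.5],
                          [0.5, 0.5, 0.0]])
```

A hard-coded matrix survives any change in the RNG's implementation, which is exactly why it makes the better regression fixture.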

make sure once again that this addition does fail in main

Sure, I can also cross-validate.

@ulupo (Collaborator, Author) commented Nov 8, 2021

Well, we must be confident that our test fails if we break something.
I am not a fan of having a test that won't reliably fail, on every run, when the corresponding feature is broken.

Not sure I follow -- this would definitely happen after my edits. But it is the same test, for the same property: testing with hypothesis covers tricky things like overflows, while the hard-coded example makes sure we don't regress. Either way, it is the same property (equivariance) that is being tested.

@ulupo (Collaborator, Author) commented Nov 8, 2021

On the other hand, I'm happy to split the tests -- e.g. because otherwise the same specific dm would be tested multiple times, once per run made by hypothesis. This alone is a good enough reason for me to split!

@ulupo (Collaborator, Author) commented Nov 8, 2021

@MonkeyBreaker I have now added the separate explicit regression test as discussed. I checked that it fails on main currently.

Let me know if you'd like further changes for this PR.

@MonkeyBreaker (Collaborator):

Not sure I follow -- this would definitely happen after my edits. But it is the same test, for the same property: testing with hypothesis covers tricky things like overflows, while the hard-coded example makes sure we don't regress. Either way, it is the same property (equivariance) that is being tested.

I am good with having 2 separate tests for validating the feature. I was afraid that with only 1 test (without the case that we are sure always fails), we could not say with certainty that all the tests are sound.

But now, with both tests implemented, I am happy :)

@MonkeyBreaker (Collaborator):

LGTM!
I could reproduce the issue on my machine with your test, and after applying the fix, all tests pass!

@ulupo ulupo merged commit e302bf6 into giotto-ai:main Nov 8, 2021
@ulupo deleted the fix_zero_diameter_edges_0_dim branch on November 8, 2021 at 12:23
Labels
bug Something isn't working
Successfully merging this pull request may close these issues.

[BUG] Dead comment in compute_dim_0_pairs/incorrect removal of edges with zero filtration value
2 participants