
test_chained_min: relax test constraints #366

Merged — 1 commit merged from stephanlachnit:patch-1 into tBuLi:master on Mar 14, 2023

Conversation

stephanlachnit
Contributor

This test fails on ppc64el otherwise, see e.g. https://ci.debian.net/data/autopkgtest/testing/ppc64el/s/symfit/32008561/log.gz (search for test_chained_min).
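For illustration only (none of the names below appear in the actual test): the kind of change this amounts to is loosening a numerical tolerance so that small platform-dependent rounding differences, like the ones seen on ppc64el, no longer fail the test. A minimal sketch with `pytest.approx`:

```python
import pytest


def run_chained_fit():
    # Stand-in for the actual symfit minimization in test_chained_min;
    # the small offset mimics platform-dependent rounding noise.
    return 4.0 + 2e-8


def test_chained_min_sketch():
    expected = 4.0
    # A very tight tolerance (e.g. rel=1e-12) fails on platforms whose
    # floating point rounding differs slightly; a modestly relaxed one
    # still catches genuine regressions.
    assert run_chained_fit() == pytest.approx(expected, rel=1e-6)
```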

@pckroon
Collaborator

pckroon commented Mar 7, 2023

Thanks. I have no issues with this at all. @tBuLi any concerns?

I would really appreciate your input on how to deal with these types of numerical testing issues, though, at a more general level. Does it make sense to just relax them until it works, or does that simply invalidate the tests?

@Jhsmit
Collaborator

Jhsmit commented Mar 7, 2023

We could use something like `skipif` to run the numerical tests with strict constraints only on one platform and Python version?
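A rough sketch of what that could look like with `pytest.mark.skipif` (the choice of reference platform and Python version below is just an example, not a decision):

```python
import platform
import sys

import pytest

# Example only: treat x86_64 + CPython 3.10 as the reference environment
# on which the strict tolerances are enforced; other platforms would
# still run the relaxed variant of the test.
ON_REFERENCE_PLATFORM = (
    platform.machine() == "x86_64" and sys.version_info[:2] == (3, 10)
)


@pytest.mark.skipif(
    not ON_REFERENCE_PLATFORM,
    reason="strict numerical tolerances are only checked on the reference platform",
)
def test_chained_min_strict():
    # strict-tolerance variant of the test would go here
    ...
```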

@stephanlachnit
Contributor Author

I would really appreciate your input on how to deal with these types of numerical testing issues, though, at a more general level. Does it make sense to just relax them until it works, or does that simply invalidate the tests?

I guess it depends a bit on what you want to test/achieve with it. In this case I think it sounds reasonable, given that this is pretty much within what you can expect with floating point precision.

When we talk about #348, however, this argumentation is not really valid: the test just refuses to work on i386 at all (and I'm still not entirely sure why).

I guess it boils down to: do you expect the tests to show that it works in general? If so, then relaxing the requirements seems reasonable if the result is not too far off, like in this case.

If you want to test the performance of the fitters, then relaxing requirements until it works does indeed invalidate the purpose of the tests. I don't know a good solution for this tbh, since a "failure" in that case does not mean the software is broken. But if these tests are there to test the performance, then running them as a CI check (that has to pass) does not make a lot of sense.
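One possible compromise, as a sketch only (the marker name is made up): keep the performance-style tests but put them behind a dedicated pytest marker, so the required CI job can deselect them while they can still be run on demand.

```python
# conftest.py (illustrative) -- register the custom marker so pytest
# does not warn about an unknown mark.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "performance: numerical accuracy/performance tests that should not gate CI",
    )
```

```python
import pytest


@pytest.mark.performance
def test_fitter_accuracy():
    # a tight-tolerance check on fit quality would go here
    ...
```

The required CI run would then invoke `pytest -m "not performance"`, and a separate, non-blocking job could run the full suite.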

@pckroon
Collaborator

pckroon commented Mar 13, 2023

I guess it boils down to: do you expect the tests to show that it works in general?

Great question :)
We (I at least) do not want to test the scipy optimizers. I do want to test the symfit side of the code. I guess the summary is that our current tests are rather integrative in nature, which is... not ideal.
My vote is that this change is OK; and we need to revisit our tests (but who has the time...)

@tBuLi Please merge and release when able :)

@tBuLi
Owner

tBuLi commented Mar 14, 2023

I agree that we should only be testing our symfit code, not scipy. Unfortunately this is a bit hard to untangle, and there are a lot of tests that are indeed kind of written as performance tests even though their goal is just to make sure our code is executed correctly.

So I have no problem with this PR, improving the tests one step at a time ;). I don't think we need a new release just for this though, or do we?

tBuLi merged commit c6bf571 into tBuLi:master on Mar 14, 2023
stephanlachnit deleted the patch-1 branch on Mar 14, 2023
@stephanlachnit
Contributor Author

So I have no problem with this PR, improving the tests one step at a time ;). I don't think we need a new release just for this though, or do we?

Nah that's fine, thx!
