
fix Catalyst using warning #944

Merged
merged 3 commits into SciML:master from drop_unused_parametric_types on Jun 10, 2024

Conversation

isaacsas
Member

Hopefully fixes the warning when calling using Catalyst.

@TorkelE
Member

TorkelE commented Jun 10, 2024

Thanks!

There are also these two, which have been around for a while:

WARNING: both Symbolics and ModelingToolkit export "infimum"; uses of it in module Catalyst must be qualified
WARNING: both Symbolics and ModelingToolkit export "supremum"; uses of it in module Catalyst must be qualified

which I really do not know how to fix. Chris suggested that it might have something to do with DomainSet, but I also see it when loading Catalyst normally sometimes, so I am not sure.
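For reference, a minimal sketch (not from this PR) of how that kind of export clash is usually silenced in Julia: explicitly import the names from whichever package's definition is wanted, or qualify every use. Which package Catalyst actually wants here is an assumption.

# Hypothetical sketch: resolving "both Symbolics and ModelingToolkit export X" warnings.
module ExportClashExample
using Symbolics, ModelingToolkit
# An explicit import takes precedence over the `using`-provided bindings, so
# unqualified `infimum`/`supremum` become unambiguous and the warning disappears.
import Symbolics: infimum, supremum
# Alternatively, skip the import and qualify each use, e.g. Symbolics.infimum(dom).
end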

@isaacsas
Member Author

Those aren't Catalyst related, and I don't see them currently. (There are other ones I am seeing from other packages, but we can't handle those.)

test/runtests.jl Outdated
@@ -66,6 +65,7 @@ using SafeTestsets, Test

# Tests extensions.
@time @safetestset "BifurcationKit Extension" begin include("extensions/bifurcation_kit.jl") end
@time @safetestset "Steady State Stability Computations" begin include("miscellaneous_tests/stability_computation.jl") end
Member Author

@TorkelE these tests use the HC extension, so they shouldn't be run before the extension section. I think they were run much too early in the test ordering before.

Member

It only really used it because I wanted a nice way to compute steady states of systems with multiple steady states. I think it makes sense to have the extension section only contain tests that actually test the extensions? If you do not want tests that utilise that code earlier, I could try to come up with another way to compute those steady states.

Member Author
@isaacsas Jun 10, 2024

At a minimum, if we are using solvers like SteadyStateDiffEq, I'd put these after the ODE/SDE/jump tests. So maybe after @time @safetestset "Nonlinear and SteadyState System Solving" begin include("simulation_and_solving/solve_nonlinear.jl") end?
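For illustration, a sketch of the ordering being proposed in test/runtests.jl; only the two @safetestset lines quoted in this thread are taken from the discussion, the surrounding comments are assumptions about the file's layout.

using SafeTestsets, Test
# ... ODE/SDE/jump simulation tests would come first ...
@time @safetestset "Nonlinear and SteadyState System Solving" begin include("simulation_and_solving/solve_nonlinear.jl") end
# Stability computations rely on steady-state solving (and the HC extension),
# so they would run only after the solver and extension tests.
@time @safetestset "Steady State Stability Computations" begin include("miscellaneous_tests/stability_computation.jl") end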

Member Author

Definitely shouldn't be testing stability before we test solvers though since it uses their results.

Member Author

Also though, if it is relying on the extension's results as a way to validate stuff it should probably come after the tests that validate the extension itself.

Member Author

Just to add -- I like these tests and agree that comparing with HC is a good idea. This is just about ensuring everything the test uses has been tested before this test is run.

Member

It is just impossible to ensure that all tests follow an order where everything is introduced only after it has been tested (e.g. we depend on running simulations in most test files, even before we test that simulations are correct). In principle this might run two superfluous test blocks (if there turns out to be an HC error that is only caught when that extension is tested), but moving the test lines around seems excessive for that?

If you want, we can create an additional test block towards the end for "tangential features" or something, but I do think there is something to be gained by grouping related tests together (where, admittedly, the common denominator of the misc tests is that they do not really have a common denominator).

Member Author

Yes, I agree that it is impossible in general and we use solving throughout as part of various system tests. But in this case this is testing functionality that is clearly downstream of tests for the ability to accurately solve for steady-states. I moved it to its own section so it is no longer grouped as part of the extension tests.

Member

Sounds good for now.

I think a better argument is that stability (and e.g. serialization) are essentially self-contained features that are less likely to break from changes, and thus should be tested much further back in the pipeline. Meanwhile, the API tests should maybe be even earlier than they are.

Generally I do think we should keep the misc category. I think it can be useful for a new user who introduces some feature and tests: they then have a default category where they can stick their tests until further notice.

@isaacsas
Member Author

isaacsas commented Jun 10, 2024

Looking at the test warnings, the only one I don't know how to fix, or even what is causing it, is

┌ Warning: dot() is deprecated, use the non-do-block form

I assume it is GraphViz_jll related, but I can't find where this warning is even created.

The remaining warnings are, I believe, all from packages we depend on, primarily SteadyStateDiffEq.

@TorkelE
Member

TorkelE commented Jun 10, 2024

I went through the warnings in the previous PR and couldn't really figure them out either. I did some testing, especially on the SteadyState ones, but couldn't reproduce them when I ran them in a normal script, so I am not sure exactly what to do.

@isaacsas
Member Author

Did you turn warnings on via the command line flag when you ran your normal script?

@TorkelE
Member

TorkelE commented Jun 10, 2024

I just ran it in a VSCode window. That usually produces warnings from some things, but there may well be details I am unaware of about how warnings are generated that affected this.

@isaacsas
Member Author

I think that one needs to start Julia with julia --depwarn=yes to see such warnings when running code outside of the testing mode.
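For context, a small sketch of why such warnings stay invisible in an ordinary session: Julia suppresses deprecation warnings unless started with --depwarn=yes, while the test run has them enabled, which is why they show up there but not in a normal script. The active setting can be checked from within a session.

# Deprecation warnings are off by default; start Julia as `julia --depwarn=yes script.jl`
# to see them outside of the test run. The active setting is visible via:
Base.JLOptions().depwarn   # 0 = no, 1 = yes, 2 = error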

@TorkelE
Member

TorkelE commented Jun 10, 2024

Yeah, that sounds like something I might have missed. I will probably circle back to this at some later point, equipped with this knowledge.

isaacsas merged commit aa6910b into SciML:master on Jun 10, 2024
7 checks passed
isaacsas deleted the drop_unused_parametric_types branch on June 10, 2024 at 22:41