
Different test values on julia v1.9.3 #1617

Closed
JoshuaLampert opened this issue Aug 29, 2023 · 10 comments · Fixed by #1625
Labels: bug (Something isn't working), testing

Comments

@JoshuaLampert
Member

Recently, CI has been failing due to different test values in one elixir with a 1D TreeMesh, see e.g. here. I was able to reproduce the error locally on julia v1.9.3 (and v1.10.0-beta2), but not on julia v1.9.2.

@ranocha added the bug (Something isn't working) and testing labels Aug 29, 2023
@ranocha
Member

ranocha commented Aug 29, 2023

@andrewwinters5000 The failure occurs for one of the wet-dry elixirs. Could you maybe take a look at it and increase the test tolerance if required?

@andrewwinters5000
Member

I can try and carve out time soon to have a look.

@ranocha
Member

ranocha commented Aug 29, 2023

Thanks a lot!

@andrewwinters5000
Member

Investigating the results of elixir_shallowwater_beach.jl with julia 1.9.0 versus julia 1.9.3, the error values for all quantities start to differ around the tenth decimal place. Because the error in the bottom topography is always quite small, it seems to be more sensitive to whatever has changed "under the hood" in the new julia release. Running to tspan=(0.0, 0.05), as done in the beach test, we get:

# For julia 1.9.0
(l2 = [0.17979210479598923, 1.2377495706611434, 6.289818963361573e-8], linf = [0.845938394800688, 3.3740800777086575, 4.4541473087633676e-7])
# For julia 1.9.3
(l2 = [0.17979210239962704, 1.2377495649103047, 6.289758492103817e-8], linf = [0.8459383936345142, 3.374080078500849, 4.451690074347425e-7])

The fluid quantities pass, but the bottom topography error fails, e.g.,

abs(l2_julia190[3] - l2_julia193[3]) = 6.047125775517298e-13

which is larger than the default atol=1.1102230246251565e-13 (i.e., 500 * eps(Float64)) used in the testing.
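For concreteness, the failing check can be reproduced with plain Julia arithmetic; the variable names below are just labels for the two bottom topography values quoted above:

```julia
# l2 error of the bottom topography reported above, per julia version
l2_b_190 = 6.289818963361573e-8   # julia 1.9.0
l2_b_193 = 6.289758492103817e-8   # julia 1.9.3

atol = 500 * eps(Float64)         # 1.1102230246251565e-13, the default test tolerance

abs(l2_b_190 - l2_b_193)          # 6.047125775517298e-13, larger than atol
isapprox(l2_b_190, l2_b_193; atol = atol)  # false, so the comparison fails
```

Note that `isapprox` with a nonzero `atol` and no `rtol` performs a purely absolute comparison, which is why the ~6e-13 drift trips it.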

We could adjust the tolerances, but I think a better way is to change this elixir to set adaptive=false in the SSPRK43 time integration scheme and update the reference values accordingly. My reasoning is two-fold:

1. Adaptive time stepping can cause the method to "lose" well-balancedness, at least up to the (default) tolerances set in the adaptive integrator. This has not caused any positivity issues yet, but it is something to keep in mind.
2. When I run elixir_shallowwater_beach.jl to the final time, I get a different number of accepted and total time steps between the two julia versions, even with identical Trixi.jl and OrdinaryDiffEq.jl versions:

# For julia 1.9.0
Final time: 10.0  Time steps: 15078 (accepted), 20324 (total)
# For julia 1.9.3
Final time: 10.0 Time steps: 15073 (accepted), 20387 (total)

In contrast, when I run elixir_shallowwater_parabolic_bowl.jl, which also currently uses adaptive time stepping, the number of accepted and total time steps is identical for both julia versions. However, the parabolic bowl is a slightly simpler test case than the beach run-up, which might explain the difference.
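To make the step sequence independent of the error estimator, the proposed change would look roughly like the following sketch. Here `ode` and `callbacks` are assumed to be defined earlier in the elixir, and the concrete `dt` value is a placeholder that would have to respect the CFL restriction:

```julia
using OrdinaryDiffEq

# Run SSPRK43 with a fixed step size instead of error-based adaptive stepping,
# so the accepted/total step counts cannot drift between julia versions.
sol = solve(ode, SSPRK43();
            dt = 1.0e-5,            # placeholder; must be small enough for stability
            adaptive = false,       # disable error-based step-size control
            save_everystep = false,
            callback = callbacks)
```

With `adaptive = false`, the error estimate no longer feeds back into the step size, so tiny floating-point differences between julia versions can no longer change which steps are accepted or rejected.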

@ranocha
Member

ranocha commented Aug 30, 2023

This is weird... Is there any discontinuous behavior involved in this setup that could explain these differences?

@andrewwinters5000
Member

andrewwinters5000 commented Aug 30, 2023

> This is weird... Is there any discontinuous behavior involved in this setup that could explain these differences?

The velocity is discontinuous but the water height and bottom are not.

Update: Here is a picture of the initial conditions, where H = h + b and the other variables are the primitives.
[Screenshot: plot of the initial conditions, 2023-08-30]

@ranocha
Member

ranocha commented Aug 31, 2023

What I meant was more some discontinuity in the discretization. Since this is a test case with drying, we use some thresholds etc. that may explain this.

@ranocha
Member

ranocha commented Aug 31, 2023

I think you could just use a slightly larger absolute tolerance in this case?
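In Trixi.jl's test suite that would be roughly a one-keyword change, e.g. as in the following sketch. The reference values are those quoted earlier in this thread; the exact test-file layout and the loosened atol value are assumptions:

```julia
@trixi_testset "elixir_shallowwater_beach.jl" begin
    @test_trixi_include(joinpath(EXAMPLES_DIR, "elixir_shallowwater_beach.jl"),
                        l2=[0.17979210479598923, 1.2377495706611434, 6.289818963361573e-8],
                        linf=[0.845938394800688, 3.3740800777086575, 4.4541473087633676e-7],
                        # loosened from the default 500 * eps() so the ~6e-13
                        # version-to-version drift in the bottom topography passes
                        atol=1.0e-12)
end
```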

@andrewwinters5000
Member

> I think you could just use a slightly larger absolute tolerance in this case?

That would be okay from my side, although it is interesting that the error-based time stepping changes so noticeably.

@ranocha
Member

ranocha commented Sep 1, 2023

Yes, definitely...
