Strange result of multiplication #64
Comments
Your example doesn't run for me; I get an error with a very long stacktrace pointing to the line `sol = solve(prob,Tsit5(),saveat=0.01,callback=cb)`.

Edit: Ok, I can run the code if I change `tspan = (0.0,2.0)` to `tspan = (0.0 ± 0.0, 2.0 ± 0.0)`.
I think that the differential equation example is working correctly. What you're seeing is elimination of uncertainty due to correlation: `x - x` is exactly zero because both operands are the same quantity, whereas two independent measurements with the same nominal value and uncertainty are not:

```julia
julia> x = 0.7 ± 0.1
0.7 ± 0.1

julia> x - x
0.0 ± 0.0

julia> (0.7 ± 0.1) - (0.7 ± 0.1)
0.0 ± 0.14

julia> x / x
1.0 ± 0.0

julia> (0.7 ± 0.1) / (0.7 ± 0.1)
1.0 ± 0.2
```

For example, you can use:

```julia
module Test_bouncingBall_with_measurement

using DifferentialEquations, Measurements

function f(du,u,p,t)
    du[1] = u[2]
    du[2] = -9.81
end

function condition(u,t,integrator)
    z = u[1] + 1e-12
    return z
end

const restitution = 0.7 ± 0.1   # coefficient of restitution

function affect_neg(integrator)
    println("Event at time = ", integrator.t, ", h = ", integrator.u[1])
    v_before = integrator.u[2]
    @show v_before
    @show Measurements.derivative(v_before, restitution)
    v_after = -restitution*v_before
    @show v_after
    @show Measurements.derivative(v_after, restitution)
    integrator.u[2] = v_after
    auto_dt_reset!(integrator)
    set_proposed_dt!(integrator, integrator.dt)
    println()
end

cb = ContinuousCallback(condition,nothing,affect_neg! = affect_neg)
u0 = [1.0 ± 0.2, 0.0]
tspan = (0.0 ± 0.0, 2.0 ± 0.0)
prob = ODEProblem(f,u0,tspan)
sol = solve(prob,Tsit5(),saveat=0.01,callback=cb)

end
```

Output:
Your second example doesn't give the same result because it doesn't take the correlation between the quantities into account. Does this make any sense?
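To make the role of correlation concrete, here is a small sketch (my own illustration, not part of the original thread) of how uncertainty propagates across two bounces when the velocity itself already depends on the restitution coefficient:

```julia
using Measurements

e  = 0.7 ± 0.1    # coefficient of restitution
v0 = -4.0         # velocity before the first bounce, no uncertainty

v1 = -e*v0        # after first bounce: 2.8 ± 0.4 (σ = |v0|·0.1)
v2 = e*v1         # after second bounce: v2 = e^2*|v0|

# Because v1 is correlated with e, Measurements.jl propagates
# σ(v2) = |d(e^2*|v0|)/de|·σ(e) = 8·0.7·0.1 = 0.56,
# not the value obtained by treating v1 and e as independent.
```

Treating `v1` and `e` as independent would instead give √((0.4·0.7)² + (2.8·0.1)²) ≈ 0.40 rather than 0.56, which is the kind of discrepancy you see between the in-simulation result and the standalone multiplication.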
O.k., thanks for your explanation. Yes, this makes sense (so everything is correct, just my intuitive understanding was wrong).
Anyway, do you have an analytic solution for this problem to compare with?
You are right, this should be checked. Since the equations are so simple, I have now derived the analytic solution manually:

```julia
module Test_bouncingBall_with_measurement_analytic

using Measurements

g  = 9.81
e  = 0.7 ± 0.1
h0 = 1.0 ± 0.2

#=
der(h) = v ; h(0) = h0
der(v) = -g ; v(0) = 0
when h <= 0 then
    v = -e*v
end when

Analytic solution of the differential equation with
h(t_ev) = h_ev, v(t_ev) = v_ev:
    v = -g*(t-t_ev) + v_ev
    h = -g/2*(t-t_ev)^2 + v_ev*(t-t_ev) + h_ev

First phase (t_ev = 0, v_ev = 0, h_ev = h0):
    v = -g*t
    h = -g/2*t^2 + h0

First event:
    h_ev1 = 0 -> t_ev1 = sqrt(2*h0/g)
    v_ev1_before = -g*t_ev1
    v_ev1_after  = -e*v_ev1_before

Second phase (t_ev = t_ev1, v_ev = v_ev1_after, h_ev = 0):
    v = -g*(t-t_ev1) + v_ev1_after
    h = -g/2*(t-t_ev1)^2 + v_ev1_after*(t-t_ev1)

Second event:
    h_ev2 = 0 = -g/2*(t-t_ev1)^2 + v_ev1_after*(t-t_ev1)
              = (t-t_ev1)*(-g/2*(t-t_ev1) + v_ev1_after)
    -> t_ev2 - t_ev1 = 2*v_ev1_after/g
    v_ev2_before = -g*(t_ev2-t_ev1) + v_ev1_after
                 = -g*2*v_ev1_after/g + v_ev1_after
                 = -v_ev1_after
    v_ev2_after = -e*v_ev2_before
=#

t_ev1        = sqrt(2*h0/g)
v_ev1_before = -g*t_ev1
v_ev1_after  = -e*v_ev1_before
t_ev2        = t_ev1 + 2*v_ev1_after/g
v_ev2_before = -v_ev1_after
v_ev2_after  = -e*v_ev2_before

@show t_ev1
@show v_ev1_before
@show v_ev1_after
@show t_ev2
@show v_ev2_before
@show v_ev2_after

end
```

Resulting in:
Hm. So this is not the result computed by DifferentialEquations.jl and Measurements.jl; even the uncertainty at the first event is 20% different from the analytic solution. I simulated with `reltol=1e-8` and `saveat=0.0001`, and also tried another solver (`BS5`), but the numeric solution did not change much. So there is an issue. The problem seems to be that the analytic solution computes an uncertainty in the event times (`t_ev1`, `t_ev2`), whereas the solution with DifferentialEquations.jl/Measurements.jl has no uncertainty in the event times.
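As a sanity check on the analytic event time (my own addition, not part of the original thread): linear error propagation on `t_ev1 = sqrt(2*h0/g)` gives σ(t_ev1) = σ(h0)/√(2·g·h0), and Measurements.jl reproduces exactly that:

```julia
using Measurements

g  = 9.81
h0 = 1.0 ± 0.2

t_ev1 = sqrt(2*h0/g)             # ≈ 0.452 ± 0.045

# First-order propagation done by hand:
# |∂t_ev1/∂h0|·σ(h0) = σ(h0)/sqrt(2*g*h0)
σ_manual = 0.2 / sqrt(2*g*1.0)   # ≈ 0.045
```

So the event times must carry a nonzero uncertainty; a numeric solution whose event times are exact `Float64` values cannot be consistent with this.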
A couple of comments: I'd tentatively say that there is nothing wrong in the solver itself. Maybe we can convince @ChrisRackauckas to have a look at this? 🙂
The more I look at this, the more I'm convinced that the solver is working fine, but maybe something weird is happening when the callback is invoked?
What does Measurements.jl do during the rootfind? My guess is that rootfinding is not something that plays well with Measurements. We're calling Roots.jl here.
Here is the code to compute `t_ev1` with Roots.jl:

The result is exactly the result of the analytic solution. This indicates that Measurements.jl is doing it right. A potential reason for the issue could be that the bounds `t1`, `t2` given to `find_zero(.., (t1, t2))` in DifferentialEquations are `Float64` instead of `Measurement{Float64}`. But maybe it is also something else.
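The snippet itself was lost in this transcript; a reconstruction of what such a check could look like (the bracketing bounds and the height function here are my assumptions, not the original code) is:

```julia
using Measurements, Roots

g  = 9.81
h0 = 1.0 ± 0.2

# Height during the first flight phase: h(t) = h0 - g/2*t^2.
h(t) = h0 - g/2*t^2

# Pass Measurement-valued bounds so the uncertainty survives the root find.
t_ev1 = find_zero(h, (0.0 ± 0.0, 1.0 ± 0.0))

# Should agree with the closed form sqrt(2*h0/g) ≈ 0.452 ± 0.045.
```

The key point is that if the bounds are plain `Float64`, the `Measurement` type, and with it the uncertainty, is dropped from the result.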
It looks like we have the same guess 🙂

I verified that these lines come from the callback handling in DiffEqBase. I need to investigate more, but it looks like the type of the bounds passed to the root finder is indeed the problem.
When changing the line in https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/2386f211a2228bb302a0145c039ffa9ca2d2b450/src/callbacks.jl#L561 so that the one-ulp floating-point shift is removed, the event times get the correct uncertainty.
So, this is clearly an issue of DifferentialEquations.jl and I will close this issue here.
That means it's a Roots.jl issue, since we have to have that shift. However, changing by one floating-point number shouldn't cause things to go way off in any sense, so it sounds like Measurements.jl needs to specialize a dispatch on `nextfloat`/`prevfloat`.
Anyway, I would say you should reopen this, because it's clearly not an issue with DifferentialEquations.jl: the fact that shifting a specific number type by one floating-point value breaks the setup is not something the differential equation solver can really solve.
The issue is probably lines 713 to 714 in Measurements.jl (at 2164595), where the result is converted to `Float64`. Maybe I couldn't come up with a consistent definition?
We had to do this with Dual numbers too: https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/init.jl#L46-L47. Basically, the assumption is that if someone is calling `prevfloat` or `nextfloat`, they are likely shifting for a fairly specific control-flow reason (here, just to ensure that the rootfinding solution satisfies that the ball stays in the zone). I don't think you should worry about the math at that level (since it's clearly below the accuracy), but the same structure needs to shift over. It's essentially allowing a posthoc rounding change.
Ok, you kept the same type instead of converting to the wrapped type. That's probably what I should do here as well.
Think about it as the same kind of posthoc rounding adjustment.
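A minimal sketch of what such a specialization could look like (an illustration of the idea only, not the actual fix in Measurements.jl; the helper names `shift_next`/`shift_prev` are hypothetical, and building a fresh measurement as done here would drop the correlation bookkeeping, which is exactly the "consistent definition" difficulty mentioned above):

```julia
using Measurements

# Shift only the nominal value by one ulp and keep the uncertainty, so the
# operation returns a Measurement instead of silently converting to Float64.
shift_next(m::Measurement) =
    measurement(nextfloat(Measurements.value(m)), Measurements.uncertainty(m))
shift_prev(m::Measurement) =
    measurement(prevfloat(Measurements.value(m)), Measurements.uncertainty(m))
```

An actual fix presumably also has to preserve the correlation information stored in the measurement, so the shifted value stays correlated with the original.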
It would be nice to have a simple fix by redefining `nextfloat`. I just would like to point out that on the DifferentialEquations.jl side there might also be another option (that I would prefer for another reason): a root-finding function on a zero-crossing function f(t) will determine a small interval [t1,t2] such that f(t1)*f(t2) <= 0. The simplest approach is to either return t1 as the root (if the root finding is seen as a barrier) or t2 as the root, to guarantee that the zero crossing took place (this is needed by a huge number of Modelica models, which will become available in Modia). In either case, `nextfloat`/`prevfloat` is not needed. The problem is that Roots.jl seems not to return the information about the interval. I opened a ticket asking to optionally return this information. As a temporary fix, I could also offer to provide this root-finding function by a small adaptation of solveOneNonlinearEquation.jl of ModiaMath, which implements Brent's algorithm (a direct copy of the original code from 1973 which, to my understanding, handles all the fine details of a root close to the interval boundary, taking eps() etc. correctly into account), but it currently only returns t2 and uses a fixed type Float64.
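The interval-returning idea can be sketched with a plain bisection (my own illustration; not the Roots.jl, Brent, or ModiaMath implementation) that hands back the final bracket [t1, t2] with f(t1)*f(t2) <= 0, so the caller can pick t1 or t2 without any `nextfloat`/`prevfloat` shifting:

```julia
# Bisection that returns the final bracketing interval instead of a
# single (possibly shifted) root. Requires t1 < t2 and a sign change.
function bracket_root(f, t1, t2; atol = 1e-12)
    f1, f2 = f(t1), f(t2)
    f1*f2 <= 0 || error("initial interval must bracket a root")
    while t2 - t1 > atol
        tm = (t1 + t2)/2
        fm = f(tm)
        if f1*fm <= 0
            t2, f2 = tm, fm   # root is in [t1, tm]
        else
            t1, f1 = tm, fm   # root is in [tm, t2]
        end
    end
    return t1, t2             # f(t1)*f(t2) <= 0 still holds
end
```

An event handler that must guarantee the crossing took place would then simply use the returned `t2` as the event time.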
Roots.jl seems to have a few other issues with how the smallest bracketing interval is handled. CC @kanav99
See #65. With this I now get:

which should be more reasonable?
Yes, with #65 this now seems to be completely consistent with the analytic solution:

```julia
g  = 9.81
e  = 0.7 ± 0.1
h0 = 1.0 ± 0.2

t_ev1        = sqrt(2*h0/g)
v_ev1_before = -g*t_ev1
v_ev1_after  = -e*v_ev1_before
t_ev2        = t_ev1 + 2*v_ev1_after/g
v_ev2_before = -v_ev1_after
v_ev2_after  = -e*v_ev2_before
t_ev3        = t_ev2 + 2*v_ev2_after/g
v_ev3_before = -v_ev2_after
v_ev3_after  = -e*v_ev3_before
t_ev4        = t_ev3 + 2*v_ev3_after/g
v_ev4_before = -v_ev3_after
v_ev4_after  = -e*v_ev4_before

@show t_ev1
@show v_ev1_before
@show v_ev1_after
@show t_ev2
@show v_ev2_before
@show v_ev2_after
@show t_ev3
@show v_ev3_before
@show v_ev3_after
@show t_ev4
@show v_ev4_before
@show v_ev4_after
```

Output:
Also the time in the DiffEq code now has the correct uncertainty. Wow, I'm always afraid of silent conversions to `Float64`.
Yes!!! Very good!!!
As a final remark, here is the complete test model including the analytic solution and plots (everything looks fine with Measurements version 2.2.0):

Results in the following output (so the numeric and analytic solutions match very well):
When evaluating Measurements.jl with DifferentialEquations.jl on a simple bouncing ball example, a very strange effect occurs (performing a multiplication outside of this example gives a different result than in this example).

Here is the bouncing ball example with Measurements.jl:

Results in the following output:

Note that

`v_after = -e*v_before`

and it seems surprising that the uncertainty after this operation is nearly zero (2.8e-15). However, when performing this operation separately (outside of this example), the result is the correct one. Resulting output:

It is strange that when applying the same operation in the affect_neg callback, a wrong result is calculated. Is this a bug in Measurements.jl or did I make a mistake?