
Strange result of multiplication #64

Closed
MartinOtter opened this issue Feb 2, 2020 · 23 comments · Fixed by #65

@MartinOtter

MartinOtter commented Feb 2, 2020

When evaluating Measurements.jl with DifferentialEquations.jl on a simple bouncing ball example, a very strange effect occurs (performing the same multiplication outside of this example gives a different result than inside it):

Here is the bouncing ball example with Measurements.jl

module Test_bouncingBall_with_measurement

using DifferentialEquations, Measurements

function f(du,u,p,t)
    du[1] = u[2]
    du[2] = -9.81
end

function condition(u,t,integrator)
    z = u[1] + 1e-12
    return z
end

const e = 0.7 ± 0.1   # coefficient of restitution

function affect_neg(integrator)
    println("Event at time = ", integrator.t, ", h = ", integrator.u[1])

    v_before = integrator.u[2]
    println("   v_before = ", v_before, ", e = ", e)
    v_after = -e*v_before
    println("   v_after  = ", v_after)

    integrator.u[2] = v_after
    auto_dt_reset!(integrator)
    set_proposed_dt!(integrator, integrator.dt)
end

cb = ContinuousCallback(condition,nothing,affect_neg! = affect_neg)

u0 = [1.0 ± 0.2, 0.0]
tspan = (0.0,2.0)
prob = ODEProblem(f,u0,tspan)
sol = solve(prob,Tsit5(),saveat=0.01,callback=cb)

end

Results in the following output:

Event at time = 0.4515236409859562, h = -0.0 ± 0.2
   v_before = -4.42944691807223 ± 0.0, e = 0.7 ± 0.1
   v_after  = 3.1 ± 0.44
Event at time = 1.0836567383662918, h = -0.0 ± 0.34
   v_before = -3.1 ± 0.44, e = 0.7 ± 0.1
   v_after  = 2.170428989855373 ± 2.8e-15

Note that v_after = -e*v_before, and it seems surprising that the uncertainty after this operation is nearly zero (2.8e-15). However, when performing this operation separately (outside of this example), the result is the expected one:

using Measurements

v_before = -3.1 ± 0.44
e        =  0.7 ± 0.1
v_after  = -e*v_before
@show v_after

Resulting output:

v_after = 2.17 ± 0.44

It is strange that applying the same operation in the affect_neg callback gives a wrong result. Is this a bug in Measurements.jl, or did I make a mistake?

@giordano
Member

giordano commented Feb 2, 2020

Your example doesn't run for me, I get the following error:

ERROR: LoadError: MethodError: no method matching Float64(::Measurements.Measurement{Float64})
Closest candidates are:
  Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  Float64(::T) where T<:Number at boot.jl:718
  Float64(::Int8) at float.jl:60
  ...

with a very long stacktrace pointing to line

sol = solve(prob,Tsit5(),saveat=0.01,callback=cb)

Edit:

Ok, I can run the code if I change

tspan = (0.0,2.0)

to

tspan = (0.0 ± 0.0, 2.0 ± 0.0)

@giordano
Member

giordano commented Feb 2, 2020

I think that the differential equation example is working correctly. What you're seeing is elimination of uncertainty because Measurements.jl takes care of functional correlation, so for example:

julia> x = 0.7 ± 0.1
0.7 ± 0.1

julia> x - x
0.0 ± 0.0

julia> (0.7 ± 0.1) - (0.7 ± 0.1)
0.0 ± 0.14

julia> x / x
1.0 ± 0.0

julia> (0.7 ± 0.1) / (0.7 ± 0.1)
1.0 ± 0.2

For example, you can use Measurements.derivative to see how the derivative of v_before with respect to e changes over time:

module Test_bouncingBall_with_measurement

using DifferentialEquations, Measurements

function f(du,u,p,t)
    du[1] = u[2]
    du[2] = -9.81
end

function condition(u,t,integrator)
    z = u[1] + 1e-12
    return z
end

const restitution = 0.7 ± 0.1   # coefficient of restitution

function affect_neg(integrator)
    println("Event at time = ", integrator.t, ", h = ", integrator.u[1])

    v_before = integrator.u[2]
    @show v_before
    @show Measurements.derivative(v_before, restitution)
    v_after = -restitution*v_before
    @show v_after
    @show Measurements.derivative(v_after, restitution)

    integrator.u[2] = v_after
    auto_dt_reset!(integrator)
    set_proposed_dt!(integrator, integrator.dt)
    println()
end

cb = ContinuousCallback(condition,nothing,affect_neg! = affect_neg)

u0 = [1.0 ± 0.2, 0.0]
tspan = (0.0 ± 0.0, 2.0 ± 0.0)
prob = ODEProblem(f,u0,tspan)
sol = solve(prob,Tsit5(),saveat=0.01,callback=cb)

end

Output:

Event at time = 0.45152364098595665 ± 0.0, h = -0.0 ± 0.2
v_before = -4.42944691807223 ± 0.0
Measurements.derivative(v_before, restitution) = 0.0
v_after = 3.1 ± 0.44
Measurements.derivative(v_after, restitution) = 4.42944691807223

Event at time = 1.0836567383662938 ± 0.0, h = -0.0 ± 0.34
v_before = -3.1 ± 0.44
Measurements.derivative(v_before, restitution) = 4.42944691807223
v_after = 2.17042898985538 ± 1.8e-15
Measurements.derivative(v_after, restitution) = -1.7763568394002505e-14

Event at time = 1.5261499065325295 ± 0.0, h = -0.0 ± 0.34
v_before = -2.170428989855389 ± 1.8e-15
Measurements.derivative(v_before, restitution) = -1.7763568394002505e-14
v_after = 1.52 ± 0.22
Measurements.derivative(v_after, restitution) = 2.1704289898554014

Event at time = 1.8358951242488952 ± 0.0, h = -0.0 ± 0.4
v_before = -1.52 ± 0.22
Measurements.derivative(v_before, restitution) = 2.1704289898554014
v_after = 1.0635102050291443 ± 3.3e-16
Measurements.derivative(v_after, restitution) = -3.3306690738754696e-15

Your second example doesn't give you the same result because it doesn't take into account that v_before actually has a functional relation to e, i.e. a non-zero derivative.

Does this make any sense?
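
To make the cancellation concrete, it can be reproduced outside the solver. Here is a minimal sketch that uses the internal helper Measurements.result (the same one used as a workaround further down in this thread) to rebuild what the callback sees at the second event: the value of v_before is negated, but its derivative with respect to e is not:

using Measurements

e  = 0.7 ± 0.1
v0 = -4.42944691807223 ± 0.0      # exact velocity just before the first bounce

v1 = -e*v0                        # 3.1 ± 0.44, with derivative d(v1)/d(e) = -v0 ≈ 4.43

# Rebuild a "v_before" with the negated value but the *unchanged* derivative,
# which is what the solver hands to the callback at the second event:
v_before = Measurements.result(-v1.val, 1, v1)    # -3.1 ± 0.44, derivative still ≈ +4.43

v_after = -e*v_before
# d(v_after)/d(e) = -v_before - e*d(v_before)/d(e) ≈ 3.1 - 0.7*4.43 ≈ 0,
# so the contributions cancel and the uncertainty nearly vanishes:
@show v_after                                     # ≈ 2.17 ± 0.0
@show Measurements.derivative(v_after, e)         # ≈ 0.0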

@MartinOtter
Author

OK, thanks for your explanation. Yes, this makes sense (so everything is correct, just my intuitive understanding was wrong).

@giordano
Member

giordano commented Feb 2, 2020

Anyway, do you have an analytic solution for this problem to compare with?

@MartinOtter
Author

MartinOtter commented Feb 2, 2020

Anyway, do you have an analytic solution for this problem to compare with?

You are right, this should be checked. Since the equations are so simple, I have now derived the analytic solution manually:

module Test_bouncingBall_with_measurement_analytic

using Measurements

g = 9.81
e = 0.7 ± 0.1
h0 = 1.0 ± 0.2

#=
der(h) = v  ; h(0) = h0
der(v) = -g ; v(0) = 0

when h <= 0 then
	v = -e*v
end when

Analytic solution of differential equation with
h(t_ev) = h_ev, v(t_ev) = v_ev:

	v = -g*(t-t_ev) + v_ev
	h = -g/2*(t-t_ev)^2 + v_ev*(t-t_ev) + h_ev

First phase (t_ev = 0, v_ev=0, h_ev=h0)
	v = -g*t
	h = -g/2*t^2 + h0

First event:
	hev1 = 0 -> t_ev1 = sqrt(2*h0/g)
	            v_ev1_before = -g*t_ev1
				v_ev1_after  = -e*v_ev1_before

Second phase (t_ev = t_ev1, v_ev = v_ev1_after, h_ev = 0)
    v = -g*(t-t_ev1) + v_ev1_after
    h = -g/2*(t-t_ev1)^2 + v_ev1_after*(t-t_ev1)

Second event:
	h_ev2 = 0 = -g/2*(t-t_ev1)^2 + v_ev1_after*(t-t_ev1)
	          = (t-t_ev1)*(-g/2*(t-t_ev1) + v_ev1_after)

	-> t_ev2-t_ev1 = 2*v_ev1_after/g

	v_ev2_before = -g*(t_ev2-t_ev1) + v_ev1_after
	             = -g*2*v_ev1_after/g + v_ev1_after
	   			 = -v_ev1_after
	v_ev2_after  = -e*v_ev2_before
=#

t_ev1        = sqrt(2*h0/g)
v_ev1_before = -g*t_ev1
v_ev1_after  = -e*v_ev1_before
t_ev2        = t_ev1 +2*v_ev1_after/g
v_ev2_before = -v_ev1_after
v_ev2_after  = -e*v_ev2_before

@show t_ev1
@show v_ev1_before
@show v_ev1_after
@show t_ev2
@show v_ev2_before
@show v_ev2_after

end

Resulting in

                               # Numeric solution:
t_ev1        = 0.452 ± 0.045   # 0.45152364098595665 ± 0.0
v_ev1_before = -4.43 ± 0.44    # -4.429446918072235 ± 0.0
v_ev1_after  = 3.1 ± 0.54      # 3.1 ± 0.44
t_ev2        = 1.08 ± 0.14     # 1.083656738366296 ± 0.0
v_ev2_before = -3.1 ± 0.54     # -3.1 ± 0.44
v_ev2_after  = 2.17 ± 0.66     # 2.1704289898553957 ± 1.3e-16

Hm. So this is not the result computed by DifferentialEquations.jl and Measurements.jl; even the uncertainty at the first event differs by 20 % from the analytic solution. I simulated with reltol=1e-8 and saveat=0.0001, and also tried another solver (BS5), but the numeric solution did not change much.

So, there is an issue.

The problem seems to be that the analytic solution computes an uncertainty in the event times (t_ev1, t_ev2), whereas the solution with DifferentialEquations.jl/Measurements.jl has no uncertainty in the event times.
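
As a cross-check of where the roughly 20 % difference comes from, plain linear error propagation (a sketch; the numbers are taken from the outputs above) gives:

using Measurements

g  = 9.81
h0 = 1.0 ± 0.2
e  = 0.7 ± 0.1

# Event time: t_ev1 = sqrt(2*h0/g), so σ_t = σ_h0/sqrt(2*g*h0) ≈ 0.045.
t_ev1 = sqrt(2*h0/g)              # 0.452 ± 0.045

# The velocity before the bounce inherits this uncertainty: σ_v ≈ g*σ_t ≈ 0.44.
v_ev1_before = -g*t_ev1           # -4.43 ± 0.44

# After the bounce both σ_e and σ_v contribute:
# σ = sqrt((v_ev1_before*σ_e)^2 + (e*σ_v)^2) ≈ sqrt(0.44^2 + 0.31^2) ≈ 0.54,
# whereas the numeric solution, with exact event times, only sees the
# v_ev1_before*σ_e ≈ 0.44 term.
v_ev1_after = -e*v_ev1_before     # 3.1 ± 0.54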

@MartinOtter MartinOtter reopened this Feb 2, 2020
@giordano
Member

giordano commented Feb 2, 2020

A couple of comments:

  • my understanding is that the time doesn't propagate the uncertainty (maybe it's just sampled at regular times?), so your times should be

    t_ev1        = Measurements.value(sqrt(2*h0/g))
    t_ev2        = Measurements.value(t_ev1 +2*v_ev1_after/g)

    With this change I get the following output (extending to 4 bounces):

    t_ev1 = 0.4515236409857309
    v_ev1_before = -4.4294469180700204
    v_ev1_after = 3.1 ± 0.44
    t_ev2 = 1.083656738365754
    v_ev2_before = -3.1 ± 0.44
    v_ev2_after = 2.17 ± 0.62
    t_ev3 = 1.5261499065317703
    v_ev3_before = -2.17 ± 0.62
    v_ev3_after = 1.52 ± 0.65
    t_ev4 = 1.8358951242479817
    v_ev4_before = -1.52 ± 0.65
    v_ev4_after = 1.06 ± 0.61
    

    which agrees with

    julia> e =  0.7 ± 0.1
    0.7 ± 0.1
    
    julia> v_ev1_before = -4.42944691807223 ± 0.0
    -4.42944691807223 ± 0.0
    
    julia> v_ev1_after = -e * v_ev1_before
    3.1 ± 0.44
    
    julia> v_ev2_before = -v_ev1_after
    -3.1 ± 0.44
    
    julia> v_ev2_after = -e * v_ev2_before
    2.17 ± 0.62
    
    julia> v_ev3_before = -v_ev2_after
    -2.17 ± 0.62
    
    julia> v_ev3_after = -e * v_ev3_before
    1.52 ± 0.65
    
    julia> v_ev4_before = -v_ev3_after
    -1.52 ± 0.65
    
    julia> v_ev4_after = -e * v_ev4_before
    1.06 ± 0.61

    Does this make sense?

  • if you look at the output of my example above, you can see that when the ball touches the ground, v_before has the opposite sign compared to v_after at the previous event, but the derivative has the same sign, instead of being flipped as well. I have no idea what's going on inside the solver. As a workaround I can suggest the following hack: set v_before as

    v_before = Measurements.result(integrator.u[2].val, -1, integrator.u[2])

    in order to flip the sign of the derivative. In this way I get the following result:

    Event at time = 0.45152364098595665 ± 0.0, h = -0.0 ± 0.2
    v_before = -4.42944691807223 ± 0.0
    v_after = 3.1 ± 0.44
    
    Event at time = 1.0836567383662938 ± 0.0, h = -0.0 ± 0.34
    v_before = -3.1 ± 0.44
    v_after = 2.17 ± 0.62
    
    Event at time = 1.5261499065325295 ± 0.0, h = -0.0 ± 0.59
    v_before = -2.17 ± 0.62
    v_after = 1.52 ± 0.65
    
    Event at time = 1.8358951242488952 ± 0.0, h = -0.0 ± 0.78
    v_before = -1.52 ± 0.65
    v_after = 1.06 ± 0.61
    

I'd tentatively say that there is nothing wrong in Measurements.jl, but the solver fails to correctly propagate the derivative for some weird reason, which is not unreasonable since that package doesn't know anything about Measurements.jl, and it's already remarkable that they mostly work together.

Maybe we can convince @ChrisRackauckas to have a look at this? 🙂

@giordano
Member

giordano commented Feb 2, 2020

The more I look at this, the more I'm convinced that the solver is working fine, but maybe something weird is happening when the callback is invoked?

@ChrisRackauckas

What does Measurements.jl do during the rootfind? My guess is that rootfinding is not something that plays well with Measurements? We're calling Roots.jl here.

@MartinOtter
Author

Here is the code to compute t_ev1 with Roots.jl:

using Measurements, Roots

g  = 9.81
h0 = 1.0 ± 0.2

f(t)  = -g/2*t^2 + h0
t_ev1 = find_zero(f, (0.1±0.0, 0.5±0.0))
@show t_ev1

The result is

t_ev1 = 0.452 ± 0.045

so exactly the result of the analytic solution. This indicates that Measurements.jl is doing it right.

A potential reason for the issue could be that the bounds t1, t2 given to find_zero(.., (t1, t2)) in DifferentialEquations are Float64 instead of Measurement{Float64}. But maybe it is also something else.

@giordano
Member

giordano commented Feb 2, 2020

What does Measurements.jl do during the rootfind? My guess is that rootfinding is not something that plays well with Measurements? We're calling Roots.jl here.

It looks like we have the same guess 🙂

If I add a @show Θ in the definition of zero_func at https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/2386f211a2228bb302a0145c039ffa9ca2d2b450/src/callbacks.jl#L539-L542, I see the following output:

Θ = 0.0 ± 0.0
Θ = 1.0 ± 0.0
Θ = 0.5 ± 0.0
Θ = 0.513 ± 0.085
Θ = 0.509 ± 0.088
Θ = 0.509 ± 0.088
Θ = 0.509 ± 0.088
Θ = 0.509 ± 0.088
Θ = 0.509 ± 0.088
Θ = 0.509 ± 0.088
Θ = 0.5089510673864017
Θ = 0.508951067386402

I verified that these lines come from the find_root call at https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/2386f211a2228bb302a0145c039ffa9ca2d2b450/src/callbacks.jl#L561. The fact that Θ changes type is suspicious 🤔

I need to investigate more, but it looks like find_root is the culprit here.

@MartinOtter
Author

When changing the line at https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/2386f211a2228bb302a0145c039ffa9ca2d2b450/src/callbacks.jl#L561 in the following way:

# Θ = prevfloat(find_zero(zero_func, (bottom_θ,top_Θ), Roots.AlefeldPotraShi(), atol = callback.abstol/100))
 Θ = find_zero(zero_func, (bottom_θ,top_Θ), Roots.AlefeldPotraShi(), atol = callback.abstol/100)

i.e. removing the prevfloat call, the numeric solution with DifferentialEquations.jl gives the result of the analytic solution:

Event at time = 0.452 ± 0.045, h = -9.99e-13 ± 3.5e-15
   v_before = -4.43 ± 0.44, e = 0.7 ± 0.1
   v_after  = 3.1 ± 0.54
Event at time = 1.08 ± 0.14, h = -9.982e-13 ± 2.1e-15
   v_before = -3.1 ± 0.54, e = 0.7 ± 0.1
   v_after  = 2.17 ± 0.66
Event at time = 1.53 ± 0.27, h = -1.0002e-12 ± 3.7e-15
   v_before = -2.17 ± 0.66, e = 0.7 ± 0.1
   v_after  = 1.52 ± 0.67
Event at time = 1.84 ± 0.39, h = -1.0e-12 ± 1.1e-14
   v_before = -1.52 ± 0.67, e = 0.7 ± 0.1
   v_after  = 1.06 ± 0.62

So, this is clearly an issue of DifferentialEquations.jl and I will close this issue here.

@ChrisRackauckas

That means it's a Roots.jl issue, since we have to have that prevfloat call: otherwise we cannot guarantee anything about the solution crossing, because Roots.jl's guarantees are incorrect (it doesn't get the right floating point number all of the time; sometimes it's off by one).

However, changing by one floating point number shouldn't cause things to go way off in any sense, so it sounds like Measurements.jl needs to specialize a dispatch on prevfloat so that it just shifts the mean by one floating point number but leaves the variances untouched (since it's just a control flow thing).

@ChrisRackauckas

Anyway, I would say you should reopen this, because it's clearly not an issue with DifferentialEquations.jl: the fact that shifting a specific number type by one floating point value breaks the setup is not something the differential equation solver can really solve.

@giordano
Member

giordano commented Feb 2, 2020

The issue is probably in Measurements.jl/src/math.jl, lines 713 to 714 at commit 2164595:

Base.nextfloat(a::Measurement) = nextfloat(a.val)
Base.nextfloat(a::Measurement, n::Integer) = nextfloat(a.val, n)

Who knows why those definitions are there? I usually try not to silently convert to Float64; maybe I couldn't come up with a consistent definition?
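
As a quick check of the consequence (assuming prevfloat of a Measurement ends up calling the second definition above through Base's generic prevfloat(x) = nextfloat(x, -1)):

julia> using Measurements

julia> Θ = 0.509 ± 0.088
0.509 ± 0.088

julia> typeof(prevfloat(Θ))    # the uncertainty is silently dropped
Float64

which would match the two plain Float64 values of Θ in the output above.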

@giordano giordano reopened this Feb 2, 2020
@ChrisRackauckas

We had to do this with Dual numbers too: https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/init.jl#L46-L47

Basically, the assumption is that if someone is calling prevfloat or nextfloat, then they are likely shifting for a fairly specific control flow reason (here, just to ensure that at the rootfinding solution the ball stays in the zone). I don't think you should worry about the math at that level (since it's clearly below the accuracy), but the same structure needs to shift over. It's essentially allowing a post hoc rounding change.

@giordano
Member

giordano commented Feb 2, 2020

We had to do this with Dual numbers too: https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/init.jl#L46-L47

OK, you kept the same type instead of converting to the wrapped type. That's probably what I should do here as well.
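
A sketch of the kind of definition that keeps the type (the exact implementation in #65 may differ; result is the internal Measurements.jl helper already used earlier in this thread):

# Shift only the nominal value; keep the uncertainty and the functional
# dependencies (derivative 1 with respect to the original number).
Base.nextfloat(a::Measurement) = Measurements.result(nextfloat(a.val), 1, a)
Base.nextfloat(a::Measurement, n::Integer) = Measurements.result(nextfloat(a.val, n), 1, a)

# prevfloat would be treated analogously.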

@ChrisRackauckas

Think about it as the same as x + eps(x)

@MartinOtter
Author

MartinOtter commented Feb 2, 2020

It would be nice to have a simple fix by redefining nextfloat.

I would just like to point out that on the DifferentialEquations.jl side there might also be another option (which I would prefer for another reason): a root finding function on a zero-crossing function f(t) determines a small interval [t1,t2] such that f(t1)*f(t2) <= 0. The simplest approach is to either return t1 as the root (if the root finding is seen as a barrier) or t2 as the root, to guarantee that the zero crossing took place (this is needed by a huge number of Modelica models, which will become available in Modia). In either case, nextfloat/prevfloat is not needed. The problem is that Roots.jl does not seem to return the information about this interval. I opened a ticket asking to optionally return this information.

As a temporary fix, I could also offer to provide this root-finding function by a small adaptation of solveOneNonlinearEquation.jl from ModiaMath, which implements Brent's algorithm (a direct copy of the original code from 1973 which, to my understanding, handles all the fine details of a root close to the interval boundary, taking eps() etc. correctly into account), but it currently only returns t2 and uses the fixed type Float64.
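
For reference, here is a minimal bisection sketch of the "return t2" idea (a hypothetical helper, not the ModiaMath or Roots.jl code): shrink a bracketing interval [t1,t2] with f(t1)*f(t2) <= 0 and return the right endpoint, so the zero crossing is guaranteed to have happened and no prevfloat/nextfloat adjustment is needed afterwards.

# Hypothetical helper illustrating the idea; not an actual implementation.
function find_crossing(f, t1, t2; maxiter = 200)
    f1, f2 = f(t1), f(t2)
    f1*f2 <= zero(f1*f2) || error("f(t1) and f(t2) must have opposite signs")
    for _ in 1:maxiter
        tm = (t1 + t2)/2
        fm = f(tm)
        if f1*fm <= zero(fm)
            t2, f2 = tm, fm    # the crossing is in [t1, tm]
        else
            t1, f1 = tm, fm    # the crossing is in [tm, t2]
        end
    end
    return t2                  # the crossing lies at or before t2
end

Note that plain bisection with exact endpoints would not by itself propagate the uncertainty of f into the returned time (the event-time uncertainty seen above presumably enters through the root finder's interpolation steps, which use the function values); the sketch only illustrates how returning the right bracket avoids the prevfloat call.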

@ChrisRackauckas

Roots.jl seems to have a few other issues, since the returned interval [t1,t2] isn't actually the smallest possible interval; it stops early for some reason. We are likely going to drop Roots.jl soon for this and other reasons, since it seems to be the cause of all of our callback issues (and most of the ones you've found), and things like SciML/DiffEqBase.jl#350 would go away by writing our own bisection.

CC @kanav99

@giordano
Member

giordano commented Feb 2, 2020

See #65. With it I now get:

Event at time = 0.452 ± 0.045, h = -9.988e-13 ± 4.5e-15
v_before = -4.43 ± 0.44
Measurements.derivative(v_before, restitution) = 0.0
v_after = 3.1 ± 0.54
Measurements.derivative(v_after, restitution) = 4.42944691807223

Event at time = 1.08 ± 0.14, h = -9.984e-13 ± 1.1e-15
v_before = -3.1 ± 0.54
Measurements.derivative(v_before, restitution) = -4.429446918072207
v_after = 2.17 ± 0.66
Measurements.derivative(v_after, restitution) = 6.201225685301088

Event at time = 1.53 ± 0.27, h = -9.9966e-13 ± 9.4e-16
v_before = -2.17 ± 0.66
Measurements.derivative(v_before, restitution) = -6.2012256853009
v_after = 1.52 ± 0.67
Measurements.derivative(v_after, restitution) = 6.511286969566019

Event at time = 1.84 ± 0.39, h = -1.0004e-12 ± 1.4e-15
v_before = -1.52 ± 0.67
Measurements.derivative(v_before, restitution) = -6.511286969566255
v_after = 1.06 ± 0.62
Measurements.derivative(v_after, restitution) = 6.0772011715951555

which should be more reasonable?

@giordano
Member

giordano commented Feb 2, 2020

Yes, with #65 this now seems to be completely consistent with the analytic solution:

g = 9.81
e = 0.7 ± 0.1
h0 = 1.0 ± 0.2
t_ev1        = sqrt(2*h0/g)
v_ev1_before = -g*t_ev1
v_ev1_after  = -e*v_ev1_before
t_ev2        = t_ev1 +2*v_ev1_after/g
v_ev2_before = -v_ev1_after
v_ev2_after  = -e*v_ev2_before
t_ev3        = t_ev2 +2*v_ev2_after/g
v_ev3_before = -v_ev2_after
v_ev3_after  = -e*v_ev3_before
t_ev4        = t_ev3 +2*v_ev3_after/g
v_ev4_before = -v_ev3_after
v_ev4_after  = -e*v_ev4_before
@show t_ev1
@show v_ev1_before
@show v_ev1_after
@show t_ev2
@show v_ev2_before
@show v_ev2_after
@show t_ev3
@show v_ev3_before
@show v_ev3_after
@show t_ev4
@show v_ev4_before
@show v_ev4_after

Output:

t_ev1 = 0.452 ± 0.045
v_ev1_before = -4.43 ± 0.44
v_ev1_after = 3.1 ± 0.54
t_ev2 = 1.08 ± 0.14
v_ev2_before = -3.1 ± 0.54
v_ev2_after = 2.17 ± 0.66
t_ev3 = 1.53 ± 0.27
v_ev3_before = -2.17 ± 0.66
v_ev3_after = 1.52 ± 0.67
t_ev4 = 1.84 ± 0.39
v_ev4_before = -1.52 ± 0.67
v_ev4_after = 1.06 ± 0.62

Also the time in the DiffEq code now has the correct uncertainty.

Wow, I'm always afraid of silent conversions to Float64; this was very bad.

@MartinOtter
Author

Yes!!! Very good!!!

@MartinOtter
Author

MartinOtter commented Feb 4, 2020

As a final remark, here is the complete test model including the analytic solution and plots (everything looks fine with Measurements version 2.2.0):

module Test_bouncingBall_with_measurement

using DifferentialEquations, Measurements, PyPlot

const g   = 9.81       # Gravity constant
const h0  = 1.0 ± 0.2  # Initial height
const cof = 0.7 ± 0.1  # Coefficient of restitution


function f(du,u,p,t)
    du[1] = u[2]
    du[2] = -g
end

function condition(u,t,integrator)
    z = u[1] + 1e-12
    return z
end


function affect_neg(integrator)
    println("Event at time = ", integrator.t, ", h = ", integrator.u[1])

    v_before = integrator.u[2]
    println("   v_before = ", v_before)
    v_after = -cof*v_before
    println("   v_after  = ", v_after)
    integrator.u[2] = v_after

    auto_dt_reset!(integrator)
    set_proposed_dt!(integrator, integrator.dt)
end

cb = ContinuousCallback(condition,nothing,affect_neg! = affect_neg)

u0 = [h0, 0.0]
tspan = (0.0±0.0, 2.0±0.0)
prob = ODEProblem(f,u0,tspan)
sol  = solve(prob,Tsit5(),reltol=1e-6,saveat=0.01,callback=cb)

println("\n... Analytic solution upto second bounce")
t_ev1        = sqrt(2*h0/g)
h_ev1        = -g/2*t_ev1^2 + h0
v_ev1_before = -g*t_ev1
v_ev1_after  = -cof*v_ev1_before

t_ev2        = t_ev1 +2*v_ev1_after/g
h_ev2        = -g/2*(t_ev2-t_ev1)^2 + v_ev1_after*(t_ev2-t_ev1)
v_ev2_before = -v_ev1_after
v_ev2_after  = -cof*v_ev2_before

println("Event at time = ", t_ev1, ", h = ", h_ev1)
println("   v_before = ", v_ev1_before)
println("   v_after  = ", v_ev1_after)

println("Event at time = ", t_ev2, ", h = ", h_ev2)
println("   v_before = ", v_ev2_before)
println("   v_after  = ", v_ev2_after)

# Plot result

tt = Measurements.value.(sol.t)

figure(1)
clf()

h = getindex.(sol.u, 1)
hm = Measurements.value.(h)
hu = Measurements.uncertainty.(h)
hmax = hm + hu
hmin = hm - hu
plot(tt, hmax, label="\$h_{max} in [m]\$")
plot(tt, hmin, label="\$h_{min} in [m]\$")
plot(tt, hm, label="\$h_{mean} in [m]\$")
grid(true)
legend()
title("Bouncing ball with Measurement{Float64}")


figure(2)
clf()

v = getindex.(sol.u, 2)
vm = Measurements.value.(v)
vu = Measurements.uncertainty.(v)
vmax = vm + vu
vmin = vm - vu
plot(tt, vmax, label="\$v_{max} in [m/s]\$")
plot(tt, vmin, label="\$v_{min} in [m/s]\$")
plot(tt, vm, label="\$v_{mean} in [m/s]\$")
grid(true)
legend()
title("Bouncing ball with Measurement{Float64}")

figure(3)
clf()
plot(tt, Measurements.uncertainty.(sol.t), label="uncertainty(time) in [s]")
grid(true)
legend()
title("Bouncing ball with Measurement{Float64}")
end

This results in the following output (the numeric and analytic solutions match very well):

Event at time = 0.452 ± 0.045, h = -9.988e-13 ± 4.5e-15
   v_before = -4.43 ± 0.44
   v_after  = 3.1 ± 0.54
Event at time = 1.08 ± 0.14, h = -9.984e-13 ± 1.1e-15
   v_before = -3.1 ± 0.54
   v_after  = 2.17 ± 0.66
Event at time = 1.53 ± 0.27, h = -9.9966e-13 ± 9.4e-16
   v_before = -2.17 ± 0.66
   v_after  = 1.52 ± 0.67
Event at time = 1.84 ± 0.39, h = -1.0004e-12 ± 1.4e-15
   v_before = -1.52 ± 0.67
   v_after  = 1.06 ± 0.62

... Analytic solution up to the second bounce
Event at time = 0.452 ± 0.045, h = 0.0 ± 0.0
   v_before = -4.43 ± 0.44
   v_after  = 3.1 ± 0.54
Event at time = 1.08 ± 0.14, h = 0.0 ± 0.0
   v_before = -3.1 ± 0.54
   v_after  = 2.17 ± 0.66
