Linear error propagation theory #41
Also, may I ask why you chose this approach and dropped covariance terms? It doesn't seem difficult to include them. Another, somewhat unrelated, question: why are you using Calculus.jl and not, say, ForwardDiff.jl to calculate derivatives? I don't know Calculus.jl at all, but I suspect it also works for non-Julia functions? I'd appreciate it if you could elaborate on this choice.
Because when using only independent measurements there is no need for covariance terms at all. However, adding support for variables related by a covariance matrix is something that I'd like to have. There is
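To make the role of the covariance term concrete, here is a minimal sketch of first-order (linear) error propagation for a two-argument function. This is illustrative only, not Measurements.jl's implementation; the `propagate` helper and its inputs are hypothetical:

```julia
# First-order error propagation for f(a, b):
#   var(f) ≈ (∂f/∂a)^2 σa^2 + (∂f/∂b)^2 σb^2 + 2 (∂f/∂a)(∂f/∂b) cov(a, b)
# For independent inputs cov(a, b) = 0 and the last term vanishes.
function propagate(dfda, dfdb, σa, σb, cov_ab)
    return sqrt(dfda^2 * σa^2 + dfdb^2 * σb^2 + 2 * dfda * dfdb * cov_ab)
end

# For f(a, b) = a + b both partial derivatives are 1:
σ_indep = propagate(1.0, 1.0, 0.1, 0.2, 0.0)   # independent: sqrt(σa^2 + σb^2)
σ_corr  = propagate(1.0, 1.0, 0.1, 0.2, 0.02)  # fully correlated: cov = σa*σb, gives σa + σb
```

With full correlation the uncertainty of a sum is `σa + σb` (here `0.3`), not the `sqrt(σa^2 + σb^2)` (here about `0.224`) that the independence assumption yields.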
Just to make it clear: Also, at the beginning of development of
To make this statement more concrete, let's define a macro similar to `@uncertain`, but using ForwardDiff.jl to compute the derivatives:

using ForwardDiff, Measurements

macro fd_uncertain(expr::Expr)
    f = esc(expr.args[1]) # Function name
    n = length(expr.args) - 1
    if n == 1
        a = esc(expr.args[2]) # Argument, of Measurement type
        return quote
            x = measurement($a)
            Measurements.result($f(x.val), ForwardDiff.derivative($f, x.val), x)
        end
    else
        a = expr.args[2:end] # Arguments, as an array of expressions
        args = :([]) # Build up the array of arguments
        [push!(args.args, :(measurement($(esc(a[i]))))) for i in 1:n] # Fill the array
        argsval = :([]) # Build up the array of values of the arguments
        [push!(argsval.args, :($(args.args[i]).val)) for i in 1:n] # Fill the array
        return :(Measurements.result($f($argsval...),
                                     ForwardDiff.gradient(x -> $f(x...), $argsval),
                                     $args))
    end
end

Use it with a function already supported by Measurements.jl:

julia> sin(2 ± 0.13)
0.909 ± 0.054
julia> @uncertain sin(2 ± 0.13)
0.909 ± 0.054
julia> @fd_uncertain sin(2 ± 0.13)
0.909 ± 0.054

but this isn't very interesting, since `sin` already has a method for `Measurement` arguments. Try instead a function that doesn't:

julia> using SpecialFunctions
julia> zeta(2 ± 0.13)
ERROR: MethodError: no method matching zeta(::Measurement{Float64})
Closest candidates are:
zeta(::Number) at /home/mose/.julia/dev/SpecialFunctions/src/gamma.jl:512
zeta(::Number, ::Number) at /home/mose/.julia/dev/SpecialFunctions/src/gamma.jl:532
zeta(::BigFloat) at /home/mose/.julia/dev/SpecialFunctions/src/gamma.jl:450
...
Stacktrace:
[1] zeta(::Measurement{Float64}) at /home/mose/.julia/dev/SpecialFunctions/src/gamma.jl:513
[2] top-level scope at none:0
julia> @uncertain zeta(2 ± 0.13)
1.64 ± 0.12
julia> @fd_uncertain zeta(2 ± 0.13)
ERROR: MethodError: no method matching zeta(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(zeta),Float64},Float64,1})
Closest candidates are:
zeta(::Number) at /home/mose/.julia/dev/SpecialFunctions/src/gamma.jl:512
zeta(::Number, ::Number) at /home/mose/.julia/dev/SpecialFunctions/src/gamma.jl:532
zeta(::BigFloat) at /home/mose/.julia/dev/SpecialFunctions/src/gamma.jl:450
...
Stacktrace:
[1] zeta(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(zeta),Float64},Float64,1}) at /home/mose/.julia/dev/SpecialFunctions/src/gamma.jl:513
[2] derivative(::typeof(zeta), ::Float64) at /home/mose/.julia/packages/ForwardDiff/N0wMF/src/derivative.jl:14
[3] top-level scope at /tmp/foo.jl:8

Or a function calling a C/Fortran library (example from the documentation):

julia> using Cuba
julia> cubaerf(x::Real) =
2x/sqrt(pi)*cuhre((t, f) -> f[1] = exp(-abs2(t[1]*x)))[1][1]
cubaerf (generic function with 1 method)
julia> @uncertain cubaerf(0.5 ± 0.01)
0.5205 ± 0.0088
julia> @fd_uncertain cubaerf(0.5 ± 0.01)
ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(cubaerf),Float64},Float64,1})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:194
Float64(::T<:Number) where T<:Number at boot.jl:741
Float64(::Int8) at float.jl:60
...
Stacktrace:
[1] convert(::Type{Float64}, ::ForwardDiff.Dual{ForwardDiff.Tag{typeof(cubaerf),Float64},Float64,1}) at ./number.jl:7
[2] setindex!(::Array{Float64,1}, ::ForwardDiff.Dual{ForwardDiff.Tag{typeof(cubaerf),Float64},Float64,1}, ::Int64) at ./array.jl:767
[3] (::getfield(Main, Symbol("##31#32")){ForwardDiff.Dual{ForwardDiff.Tag{typeof(cubaerf),Float64},Float64,1}})(::Array{Float64,1}, ::Array{Float64,1}) at ./REPL[30]:2
[4] generic_integrand!(::Int32, ::Ptr{Float64}, ::Int32, ::Ptr{Float64}, ::getfield(Main, Symbol("##31#32")){ForwardDiff.Dual{ForwardDiff.Tag{typeof(cubaerf),Float64},Float64,1}}) at /home/mose/.julia/dev/Cuba/src/Cuba.jl:98
[5] dointegrate! at /home/mose/.julia/dev/Cuba/src/cuhre.jl:40 [inlined]
[6] dointegrate at /home/mose/.julia/dev/Cuba/src/Cuba.jl:202 [inlined]
[7] #cuhre#1(::Int64, ::Float64, ::Float64, ::Int64, ::Int64, ::Int64, ::Int64, ::String, ::Ptr{Nothing}, ::Missing, ::Missing, ::typeof(cuhre), ::getfield(Main, Symbol("##31#32")){ForwardDiff.Dual{ForwardDiff.Tag{typeof(cubaerf),Float64},Float64,1}}, ::Int64, ::Int64) at /home/mose/.julia/dev/Cuba/src/cuhre.jl:97
[8] cuhre at /home/mose/.julia/dev/Cuba/src/cuhre.jl:92 [inlined] (repeats 2 times)
[9] cubaerf at ./REPL[30]:1 [inlined]
[10] derivative(::typeof(cubaerf), ::Float64) at /home/mose/.julia/packages/ForwardDiff/N0wMF/src/derivative.jl:14
[11] top-level scope at /tmp/foo.jl:8
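The failures above happen because ForwardDiff pushes `Dual` numbers through code that only accepts `Float64`, such as the Cuba integrand's output array. Finite differences, the approach taken by Calculus.jl, evaluate the function only at ordinary floating-point arguments, which is why they also work for black-box C/Fortran code. A hand-rolled sketch of the idea (not Calculus.jl's actual implementation; `fd_derivative` and `blackbox` are hypothetical names):

```julia
# Central finite difference: only ever calls f at Float64 points, so f may
# internally require Float64 (e.g. by calling into a C/Fortran library).
fd_derivative(f, x::Float64; h = cbrt(eps(Float64))) = (f(x + h) - f(x - h)) / (2h)

# A "black box" that insists on Float64, like the Cuba integrand above:
blackbox(x::Float64) = x^3

fd_derivative(blackbox, 2.0)  # ≈ 12.0, even though blackbox rejects Dual numbers
```

The trade-off is accuracy and cost: a central difference carries O(h^2) truncation error and needs two function evaluations per derivative, whereas dual numbers give derivatives exact to machine precision when they can propagate through the code.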
I'm currently rewriting Covar.jl using ChainRules.jl, eventually with support even for non-holomorphic complex functions. The new implementation uses reverse-mode AD and quite naturally allows for uncorrelated as well as linearly correlated measurements. Right now I'm still waiting for some bugs in ChainRules.jl to be fixed, but it will be interesting to see how it works out eventually.
From the readme (and docs):
It looks to me that this package isn't employing "full linear propagation theory" but in fact always assumes measurements to be independent (please correct me if I'm mistaken). Specifically, if I have two `Measurement`s `a` and `b` and am interested in the uncertainty of `f(a, b)`, you seem to assume that `a` and `b` are independent, in which case the covariances vanish and the linear error propagation is simplified. See this example on Wikipedia.

I am aware of the explanations of "functional correlation" and the method used in the docs and the paper. I nonetheless suggest mentioning this extra assumption in the sentence above, as it might be a bit misleading otherwise.
(I was surprised to see a difference between a manually implemented (full) linear propagation theory and Measurements.jl for the simple function `f(x, y) = x + y`, see carstenbauer/BinningAnalysis.jl#46 (comment).)
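For `f(x, y) = x + y` the discrepancy is exactly the covariance term: full linear propagation gives `var(f) = var(x) + var(y) + 2 cov(x, y)` (exact for a sum), while the independence assumption drops the last term. A small numeric sketch with made-up, strongly correlated sample data:

```julia
using Statistics

# For f(x, y) = x + y, full linear propagation gives
#   var(f) = var(x) + var(y) + 2*cov(x, y)
# which is exact for a sum; independence drops the covariance term.
x = [1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.0, 2.9, 4.2]          # strongly correlated with x

var_full  = var(x) + var(y) + 2 * cov(x, y)
var_indep = var(x) + var(y)       # what you get assuming cov(x, y) = 0

var_full ≈ var(x .+ y)            # true: the covariance term accounts for the gap
```

For correlated data `var_indep` underestimates the true `var(x .+ y)` by `2 cov(x, y)`, which is the difference observed in the linked comparison.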