Support for compound integrals #409
Ok, we now support integrals of this type. There are two problems remaining in this PR:

**Problem 1**

I did some work to support this (see NeuralPDE.jl/src/pinns_pde_solve.jl, line 323 in 2b99f6c).
I don't know if that makes sense if we have two different dependent variables inside the integral. The inner representation ends up looking like this:

```julia
(Expr[:((cord, var"##θ#330", phi, derivative, integral, u, p)->begin
#= /Users/gabrielbirnbaum/.julia/dev/NeuralPDE/src/pinns_pde_solve.jl:620 =#
#= /Users/gabrielbirnbaum/.julia/dev/NeuralPDE/src/pinns_pde_solve.jl:620 =#
begin
let (x,) = (cord[[1], :],)
begin
cord2 = vcat(x)
cord1 = vcat(x)
end
integral(u, cord2, phi, [1], RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#330"), :phi, :derivative, :integral, :u, :p), Main.NeuralPDE.var"#_RGF_ModTag", Main.NeuralPDE.var"#_RGF_ModTag", (0xad043f06, 0x65200796, 0x30dd12fc, 0xaa20459c, 0xb967c5c2)}(quote
#= /Users/gabrielbirnbaum/.julia/dev/NeuralPDE/src/pinns_pde_solve.jl:620 =#
#= /Users/gabrielbirnbaum/.julia/dev/NeuralPDE/src/pinns_pde_solve.jl:620 =#
begin
let (x,) = (cord[[1], :],)
begin
cord2 = vcat(x)
cord1 = vcat(x)
end
(+)(u(cord1, var"##θ#330", phi), u(cord2, var"##θ#330", phi))
end
end
end),...
```

Any idea what I should do here? Should we support this?

**Problem 2**

I was testing the new feature with the compound integral, and got:

```
LoadError: MethodError: no method matching real(::ChainRulesCore.Tangent{Any, NamedTuple{(:data, :uplo), Tuple{Matrix{Float64}, ChainRulesCore.ZeroTangent}}})
Closest candidates are:
real(::Complex) at complex.jl:63
real(::LinearAlgebra.Symmetric{var"#s832", S} where {var"#s832"<:Real, S<:(AbstractMatrix{var"#s832"} where var"#s832"<:var"#s832")}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/symmetric.jl:353
real(::LinearAlgebra.Symmetric) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/symmetric.jl:355
```

This might have something to do with Quadrature, or with another part of the ecosystem upstream of NeuralPDE? I would love to hear your thoughts on this, @KirillZubov.
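For context on what the stacktrace means (a sketch, not the actual failing call inside Quadrature): `real` is being applied to a structural tangent produced by ChainRules-based AD. A minimal reconstruction of such a tangent, assuming it arises from differentiating through a `Symmetric` matrix somewhere upstream, would be:

```julia
using ChainRulesCore, LinearAlgebra

# A structural tangent shaped like the one in the stacktrace: a Tangent for a
# Symmetric matrix carries a :data field (the backing matrix) and a :uplo field.
t = Tangent{Symmetric{Float64, Matrix{Float64}}}(data = rand(2, 2), uplo = ZeroTangent())

# real(t)  # would throw the MethodError above: no method matching real(::Tangent{...})
```

Some code path appears to call `real` on a gradient value without unwrapping the structural tangent first, which is why the error points at `ChainRulesCore.Tangent` rather than at NeuralPDE itself.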
cc @ChrisRackauckas for visibility (feel free to ignore), just in case you have any insight into why problem 2 happens, or any thoughts regarding problem 1.
@KirillZubov bump :) your feedback would really help me move forward with this.
Since the inner integrand is now an expression, we also need to broadcast over it.
MWE:

```julia
using NeuralPDE, Flux, ModelingToolkit, DiffEqFlux, Zygote
import ModelingToolkit: Interval

@parameters x
@variables u(..)
domains = [x ∈ Interval(0.0, 1.0)]
chain = FastChain(FastDense(1, 15, Flux.σ), FastDense(15, 1))
initθ = Float64.(DiffEqFlux.initial_params(chain))
strategy = NeuralPDE.GridTraining(0.1)
indvars = [x]
depvars = [u(x)]
parameterless_type_θ = Float64
phi = NeuralPDE.get_phi(chain, parameterless_type_θ)
u_ = (cord, θ, phi) -> phi(cord, θ)
cord = [1.]
lb, ub = [0.], [1.]
θ = initθ
derivative = NeuralPDE.get_numeric_derivative()
integral = NeuralPDE.get_numeric_integral(strategy, indvars, depvars, chain, derivative)
integral_f = (cord, var"##θ#332", phi, derivative, integral, u_, p) -> begin
    begin
        let (x,) = (cord[[1], :],)
            begin
                cord1 = vcat(x)
            end
            # (*)(cos(x), u_(cord1, var"##θ#332", phi)) # this fails
            (*).(cos.(x), u_(cord1, var"##θ#332", phi)) # broadcasting works
        end
    end
end
integral_f(cord, θ, phi, derivative, nothing, u_, nothing)
inyf_ = (θ) -> sum(abs2, integral_f(cord, θ, phi, derivative, nothing, u_, nothing))
inyf_(θ)
Zygote.gradient(inyf_, θ)
```
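For reference, here is the broadcasting point from the MWE in isolation (a minimal sketch, independent of NeuralPDE): coordinates travel as a 1×N matrix with one column per sample point, and `cos` is not elementwise on matrices, so the integrand expression has to be broadcast.

```julia
# Coordinates as a 1×N matrix, the shape NeuralPDE passes around internally.
x = [0.0 0.5 1.0]

# cos(x) is not elementwise; with LinearAlgebra loaded it would mean the matrix
# cosine, which is undefined for a non-square 1×3 matrix. Broadcasting with the
# dot syntax applies cos to each coordinate and preserves the 1×N shape:
y = cos.(x)
```

The same reasoning applies to the whole compound integrand: every operation in the expression needs the dotted, broadcast form.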
Oh that’s great! I should have tried broadcasting. Are we comfortable merging this once the Zygote bugs are fixed?
Infinite bounds and compound expressions are supported. Does it work now? #406
@KirillZubov do you see any typos in #406? I keep getting an error. I thought it had to do with compound integrals, so I removed the compound integral, then the infinite interval, and then the integral altogether, and it still failed with the same error. Then I tested it on master after pulling. Could you test this on your end and see if it runs?

```julia
using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, Optim, DiffEqFlux, DomainSets
import ModelingToolkit: Interval, infimum, supremum

@parameters t E
@variables i(..)
Dt = Differential(t)
DE = Differential(E)
IE = Integral(E in DomainSets.ClosedInterval(0, Inf))
beta = 2.0
eq = Dt(i(t, E)) + exp(-beta*E)*i(t, E) - IE(i(t, E)*exp(-beta*E))*exp(-E) ~ 0
bcs = [i(0., E) ~ exp(-E)]
domains = [t ∈ Interval(0.0, 100.0),
           E ∈ Interval(0.0, 10.0)]
numhid = 16
chain = FastChain(FastDense(1, numhid, Flux.σ), FastDense(numhid, numhid, Flux.σ), FastDense(numhid, 1))
initθ = Float64.(DiffEqFlux.initial_params(chain))
strategy_ = GridTraining(0.05)
discretization = PhysicsInformedNN(chain, strategy_)
@named pde_system = PDESystem(eq, bcs, domains, [t, E], [i(t, E)])
prob = NeuralPDE.discretize(pde_system, discretization)
cb = function (p, l)
    println("Current loss is: $l")
    return false
end
res = GalacticOptim.solve(prob, ADAM(0.1); cb = cb, maxiters = 100)
```
If you don't have time, that's ok; I can write another example tomorrow and see if it runs.
The chain input size should be 2 (there are two independent variables, t and E).
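Concretely, a sketch of the fix for the script above, keeping everything else unchanged (the `2` matches the two independent variables t and E):

```julia
using Flux, DiffEqFlux

numhid = 16
# The PDE has two independent variables (t, E), so the first FastDense layer
# must take a 2-dimensional input rather than 1:
chain = FastChain(FastDense(2, numhid, Flux.σ),
                  FastDense(numhid, numhid, Flux.σ),
                  FastDense(numhid, 1))
initθ = Float64.(DiffEqFlux.initial_params(chain))

# Each column is one (t, E) sample point; the network maps 2×N inputs to 1×N outputs.
out = chain(rand(2, 5), initθ)
```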
Oh thanks! Then this works and is probably ready to be merged. Should we add a compound integral test?
Does an integrand with two variables work?
Yeah, it should work (assuming problem 1 I mentioned in #409 (comment) doesn't matter). Do you have any systems in mind that we could use to test?
@killah-t-cell I will add the test myself.
Some compound integrals are working. Others show different errors, ranging from problems with the inner representation to ChainRulesCore errors. I will investigate and try to fix them.
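A compound-integral test could be set up symbolically along these lines (a hypothetical equation chosen only to illustrate the shape of such a test, not one taken from this PR):

```julia
using ModelingToolkit, DomainSets
import ModelingToolkit: Interval

@parameters x
@variables u(..)

# Integral operator over the domain of x
Ix = Integral(x in DomainSets.ClosedInterval(0.0, 1.0))

# Compound integrand: an expression in u(x) rather than u(x) alone,
# which is exactly the case this PR adds support for.
eq = Ix(u(x) * cos(x)) ~ 1.0
bcs = [u(0.0) ~ 0.0]
domains = [x ∈ Interval(0.0, 1.0)]

@named pde_system = PDESystem(eq, bcs, domains, [x], [u(x)])
```

From here the usual `PhysicsInformedNN` discretization and `GalacticOptim.solve` call would apply, as in the script earlier in the thread.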