Add support for equations with complex numbers #821

#818 describes a problem with complex numbers. We can add it properly in NeuralPDE and also have an example explaining it.
@sathvikbhagavan Is there anything I can do to help?
@sathvikbhagavan I am working on the same task, and I have code that still gives incorrect results. You recently helped me with interpreting its output.
Hey @RomanSahakyan03, apologies for the late reply. Adding support for complex numbers is trivial, I think; I was working on #815 first to get doc builds passing, and I will add complex number support after that. Hey @IromainI, I am not sure I understand your question 😅. Is there an error?
@sathvikbhagavan Yep, no problem, and thanks. I hope you'll add it for both the NNODE() and PhysicsInformedNN() functions.
@sathvikbhagavan I saw that you have closed your other issues. Can we start working on complex numbers now? I have a lot of questions about it.
Yes, I will start working on it.
@sathvikbhagavan I can provide some problems I've come across, if you want.
Yes, that would be great!
Ok @sathvikbhagavan, let's start with a system of differential equations. Here is the function from which we'll create an ODEProblem (bloch_equations, shown in the full script below). From the analytic solution I know for certain that the solutions for ρ₁₁ and ρ₂₂ are real, unlike those for ρ₁₂ and ρ₂₁ (whose solutions are complex). We can confirm this with Tsit5(), which I did. Here is some code testing NNODE:

```julia
opt = Adam(0.01)
alg = NNODE(chain, opt, init_params = ps, strategy = StochasticTraining(2))

# Opt the solver in to complex-valued states (type piracy, but it lets
# solve accept the ComplexF64 initial condition):
SciMLBase.allowscomplex(::NNODE) = true

sol = solve(problem, alg, verbose = true, maxiters = 3000, saveat = 0.01)

#--------------------------------------
# Checking part: ρ₁₁ and ρ₂₂ should have (near-)zero imaginary parts
println("This is the maximum value of the imaginary part of the NNODE solution ρ₁₁: $(maximum(imag(sol[1, :])))")
println("This is the maximum value of the imaginary part of the NNODE solution ρ₂₂: $(maximum(imag(sol[2, :])))")
println("This is the maximum value of the imaginary part of the NNODE solution ρ₁₂: $(maximum(imag(sol[3, :])))")
println("This is the maximum value of the imaginary part of the NNODE solution ρ₂₁: $(maximum(imag(sol[4, :])))")

ground_truth = solve(problem, Tsit5(), saveat = 0.01)
println("This is the maximum value of the imaginary part of the ground truth solution ρ₁₁: $(maximum(imag(ground_truth[1, :])))")
println("This is the maximum value of the imaginary part of the ground truth solution ρ₂₂: $(maximum(imag(ground_truth[2, :])))")
println("This is the maximum value of the imaginary part of the ground truth solution ρ₁₂: $(maximum(imag(ground_truth[3, :])))")
println("This is the maximum value of the imaginary part of the ground truth solution ρ₂₁: $(maximum(imag(ground_truth[4, :])))")
```

And here is the output: (output omitted)
But what is interesting is that NNODE does somehow approximate ρ₁₁ and ρ₂₂. Here is the plot: (plot omitted)
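An aside on the SciMLBase.allowscomplex overload in the test above: solve consults this trait before accepting a complex initial condition, and it defaults to false, so a solver type must opt in. Below is a minimal sketch of that trait pattern under those assumptions; SomeSolver and check_complex are illustrative stand-ins, not the actual SciMLBase source.

```julia
# Illustrative sketch of the opt-in trait pattern (not real SciMLBase code).
struct SomeSolver end                # stand-in for an algorithm type like NNODE

allowscomplex(alg) = false           # default: complex u0 is rejected
allowscomplex(::SomeSolver) = true   # an algorithm opts in, as NNODE does above

function check_complex(u0, alg)
    if eltype(u0) <: Complex && !allowscomplex(alg)
        error("This algorithm does not support complex-valued problems.")
    end
end

check_complex(zeros(ComplexF64, 4), SomeSolver())  # passes thanks to the opt-in
```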
So, there are two problems here:
Here is my script:

```julia
using NeuralPDE
using OrdinaryDiffEq
using Plots
using Lux, Random
using OptimizationOptimisers

# Bloch equations for the two-level-system density matrix; ρ₂₁ = conj(ρ₁₂),
# so ρ₁₂ and ρ₂₁ are complex while ρ₁₁ and ρ₂₂ stay real.
function bloch_equations(u, p, t)
    Ω, Δ, Γ = p
    γ = Γ / 2
    ρ₁₁, ρ₂₂, ρ₁₂, ρ₂₁ = u
    dρ = [im * Ω * (ρ₁₂ - ρ₂₁) + Γ * ρ₂₂;
          -im * Ω * (ρ₁₂ - ρ₂₁) - Γ * ρ₂₂;
          -(γ + im * Δ) * ρ₁₂ - im * Ω * (ρ₂₂ - ρ₁₁);
          conj(-(γ + im * Δ) * ρ₁₂ - im * Ω * (ρ₂₂ - ρ₁₁))]
    return dρ
end

u0 = zeros(ComplexF64, 4)
u0[1] = 1
time_span = (0.0, 2.0)
parameters = [2.0, 0.0, 1.0]

problem = ODEProblem(bloch_equations, u0, time_span, parameters)

rng = Random.default_rng()
Random.seed!(rng, 0)

# Complex-valued weight initialization so the network output is complex:
chain = Chain(
    Dense(1, 16, tanh; init_weight = (rng, a...) -> kaiming_normal(rng, ComplexF64, a...)),
    Dense(16, 4; init_weight = (rng, a...) -> kaiming_normal(rng, ComplexF64, a...)))
ps, st = Lux.setup(rng, chain)

opt = Adam(0.01)
alg = NNODE(chain, opt, ps; strategy = GridTraining(0.01))
sol = solve(problem, alg, verbose = true, maxiters = 5000, saveat = 0.01)
ground_truth = solve(problem, Tsit5(), saveat = 0.01)

# Compare real and imaginary parts of each component against Tsit5:
plot(sol.t, real.(reduce(hcat, sol.u)[1, :]));
plot!(ground_truth.t, real.(reduce(hcat, ground_truth.u)[1, :]))
plot(sol.t, imag.(reduce(hcat, sol.u)[1, :]));
plot!(ground_truth.t, imag.(reduce(hcat, ground_truth.u)[1, :]))
plot(sol.t, real.(reduce(hcat, sol.u)[2, :]));
plot!(ground_truth.t, real.(reduce(hcat, ground_truth.u)[2, :]))
plot(sol.t, imag.(reduce(hcat, sol.u)[2, :]));
plot!(ground_truth.t, imag.(reduce(hcat, ground_truth.u)[2, :]))
plot(sol.t, real.(reduce(hcat, sol.u)[3, :]));
plot!(ground_truth.t, real.(reduce(hcat, ground_truth.u)[3, :]))
plot(sol.t, imag.(reduce(hcat, sol.u)[3, :]));
plot!(ground_truth.t, imag.(reduce(hcat, ground_truth.u)[3, :]))
plot(sol.t, real.(reduce(hcat, sol.u)[4, :]));
plot!(ground_truth.t, real.(reduce(hcat, ground_truth.u)[4, :]))
plot(sol.t, imag.(reduce(hcat, sol.u)[4, :]));
plot!(ground_truth.t, imag.(reduce(hcat, ground_truth.u)[4, :]))
```

u1, u2, u3, u4: (plots omitted) You can see it learns the real parts of u1 and u2 and the imaginary parts of u3 and u4 well. This is just a demonstration that training with complex-valued functions does work.
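The complex initialization can also be sanity-checked in isolation. This is a minimal sketch, using the same init_weight trick as the script above on a single illustrative layer (the layer size and input are arbitrary): a Dense layer whose weights are ComplexF64 produces a complex-valued forward pass even for real input.

```julia
using Lux, Random

rng = Xoshiro(0)
# One Dense layer with complex kaiming initialization, as in the script:
layer = Dense(1, 4; init_weight = (rng, a...) -> kaiming_normal(rng, ComplexF64, a...))
ps, st = Lux.setup(rng, layer)

y, _ = layer(ones(1, 3), ps, st)  # real input, 3 sample points
@show eltype(y)                   # ComplexF64: the forward pass is complex
```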
@RomanSahakyan03, can I use this example in the documentation?
Hi @sathvikbhagavan, my sincere apologies for the late response. We appreciate your considering our equation for inclusion in your library's documentation. For us students, me and my colleague, the opportunity to see our work in your library would be a great honor. It would not only be significant for us but would also reflect our contribution to the scientific community. As long as the example credits me and my colleague, you can certainly use it in the documentation.
@sathvikbhagavan If it's not difficult, can you explain why NNODE is bad at finding constant values? Where can I find information about how NNODE works and what kind of loss function it has?
@sathvikbhagavan I came across a problem in another equation: when the curve of a function changes very slowly (or does not change at all, i.e., is a constant), NNODE has trouble approximating the answer. This was already visible in the case we considered earlier, where the imaginary part was zero, and yet NNODE behaved strangely on it.
@RomanSahakyan03, I would say it depends on how the training is done and what loss function is used to train PINNs. They are always approximations. For NNODE, we use an L2 loss: https://github.com/SciML/NeuralPDE.jl/blob/master/src/ode_solve.jl#L189. For the documentation, I will work on it this week and add the example. Can you give a reference to the paper where it is described?
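For intuition, the idea behind such an L2 loss looks roughly like the following. This is a minimal sketch, not NeuralPDE's actual implementation (the trial-solution function ϕ and the finite-difference derivative are illustrative simplifications; see the linked ode_solve.jl for the real code):

```julia
using Statistics

# ϕ(t, θ): trial solution built from the network with parameters θ,
# constructed so that ϕ(t0, θ) == u0; f(u, p, t): the ODE right-hand side.
function l2_physics_loss(ϕ, f, p, θ, ts; ε = 1e-6)
    residuals = map(ts) do t
        dϕdt = (ϕ(t + ε, θ) .- ϕ(t - ε, θ)) ./ (2ε)  # ≈ dϕ/dt at t
        dϕdt .- f(ϕ(t, θ), p, t)                     # ODE residual at t
    end
    # Mean squared residual over the training points; abs2 also handles
    # complex residuals, which is why complex equations can train at all.
    return mean(abs2, reduce(vcat, residuals))
end

# Toy usage: trial solution for u' = -u, u(0) = 1, with θ as a scalar rate.
ϕ(t, θ) = [exp(-θ * t)]
f(u, p, t) = -u
l2_physics_loss(ϕ, f, nothing, 1.0, 0.0:0.1:1.0)  # ≈ 0, since ϕ solves the ODE
```

Minimizing this drives dϕ/dt toward f along the sampled points; the solution is only ever matched through this residual, which is why the results are always approximations.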
@sathvikbhagavan Yep. Here is the reference.
Merged #839
I'm very glad that we managed to close this issue. I'm very grateful to @sathvikbhagavan and @ChrisRackauckas. Our team is very interested in using this method for calculations. If you're interested, we could organize a 30-40 minute seminar with the aim of future collaboration.
I'd be happy to talk.
Hi @ChrisRackauckas. Thanks for waiting. I will send an invitation to your email.