remihndz/Neural-Network-ODE-Solver
Neural Network Solver for Differential Equations

About

Implementation of the differential-equation solver presented by Lagaris et al. (see the original paper) and an investigation of the method for different parameter values.

Presentation of the method

This method has the advantages of:

  • providing a meshless solution to ordinary and partial differential equations;
  • yielding a solution (and its derivatives) in closed form that can easily be used for post-processing;
  • requiring only simple feedforward networks: even a single hidden layer with few parameters provides accurate solutions;
  • allowing the computationally intensive optimization to be sped up greatly by providing the exact derivatives of the loss function or by using algorithms such as backpropagation.

In order to solve the general differential equation

    G(x, ψ(x), ∇ψ(x), ∇²ψ(x)) = 0,  x ∈ Ω,

we first define a trial function appropriate to the boundary conditions:

    ψ_t(x) = A(x) + B(x) Φ(x, θ),

where A is taken to satisfy the boundary conditions exactly and B is zero on the boundary. The function Φ is the output of a neural network with parameters θ.
For a discrete set of points S ⊂ Ω, the parameters θ are trained to minimize

    J(θ) = Σ_{xᵢ ∈ S} G(xᵢ, ψ_t(xᵢ), ∇ψ_t(xᵢ), ∇²ψ_t(xᵢ))².

Solving the minimization problem can be difficult because of the presence of local minima. For more complicated problems, or for better accuracy, a random walk on the parameter space to choose the initial guess may help prevent falling into such a pitfall.
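As a concrete illustration (a minimal sketch, not the repository's code), the trial-function construction above can be applied to the test problem dψ/dx + ψ = 0 on [0, 1] with ψ(0) = 1, whose exact solution is e^(−x). A single-hidden-layer sigmoid network plays the role of Φ, and its derivative with respect to x is written out in closed form rather than approximated:

```python
import numpy as np
from scipy.optimize import minimize

# Test problem (illustrative, not from the repo):
#   dψ/dx + ψ = 0 on [0, 1], ψ(0) = 1, exact solution ψ(x) = exp(-x).
# Trial function: ψ_t(x) = 1 + x·Φ(x, θ), so ψ_t(0) = 1 automatically
# (A(x) = 1 and B(x) = x in the notation above).

rng = np.random.default_rng(0)
q = 5                                # hidden-layer size
x = np.linspace(0.0, 1.0, 20)        # training points S

def unpack(theta):
    w, b, v = theta[:q], theta[q:2*q], theta[2*q:]
    return w, b, v

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def phi_and_dphi(theta, x):
    """Network output Φ(x) and its exact derivative dΦ/dx."""
    w, b, v = unpack(theta)
    z = np.outer(x, w) + b           # shape (len(x), q)
    s = sigmoid(z)
    phi = s @ v
    dphi = (s * (1 - s) * w) @ v     # chain rule through the sigmoid
    return phi, dphi

def loss(theta):
    """J(θ) = Σᵢ (dψ_t/dx + ψ_t)² over the training points."""
    phi, dphi = phi_and_dphi(theta, x)
    psi = 1.0 + x * phi
    dpsi = phi + x * dphi            # product rule on x·Φ(x)
    return np.sum((dpsi + psi) ** 2)

res = minimize(loss, rng.normal(size=3*q), method="BFGS")

# Compare the trained trial solution with the exact one on a fresh grid.
xt = np.linspace(0.0, 1.0, 101)
phi, _ = phi_and_dphi(res.x, xt)
err = np.max(np.abs(1.0 + xt * phi - np.exp(-xt)))
print(f"max abs error: {err:.2e}")
```

Here SciPy's BFGS is used with finite-difference gradients for brevity; supplying the exact parameter gradient of J (or using backpropagation), as noted above, would speed the optimization up considerably.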

Investigation of the (hyper-)parameters


In the absence of theoretical results for the method, we want to investigate empirically if and how the trial function converges to the analytical solution, ψ. For this purpose, we set up a test problem whose solution is known and compare it with the numerical solution, φ. To evaluate this, we use both the L2 norm of ψ − φ and how well φ satisfies the equation, namely J(θ).
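The first of these metrics can be sketched as follows (the helper name and grid are illustrative, not from the repo): the discrete L2 norm of ψ − φ is approximated by a Riemann sum over a uniform grid covering Ω.

```python
import numpy as np

def l2_error(psi_vals, phi_vals, x_grid):
    """Discrete L2 norm of psi - phi on a uniform grid (Riemann sum)."""
    h = x_grid[1] - x_grid[0]
    return np.sqrt(h * np.sum((psi_vals - phi_vals) ** 2))

# Example: exp(-x) against a copy perturbed by 1e-3·sin(πx);
# the exact L2 norm of the perturbation is 1e-3·sqrt(1/2) ≈ 7.07e-4.
x = np.linspace(0.0, 1.0, 101)
exact = np.exp(-x)
approx = exact + 1e-3 * np.sin(np.pi * x)
print(f"L2 error: {l2_error(exact, approx, x):.2e}")
```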

Letting q be the size of the (single) hidden layer in the network and n the number of points in the training set S, we suggest the following empirical convergence rates: q^(−0.5) in the hidden-layer size and e^(−c√n) in the number of training points.
