Poisson equation in 1D with Dirichlet/Robin boundary conditions

Problem setup

We will solve a Poisson equation:

\Delta u = 2, \qquad x \in [-1, 1],

with a Robin boundary condition on the right boundary

\frac{du}{dx} = u

and a Dirichlet boundary condition on the left boundary

u(-1) = 0.

The exact solution is u(x) = (x+1)^2.
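
One can verify this directly:

\frac{d^2u}{dx^2} = 2, \qquad u(-1) = 0, \qquad \frac{du}{dx}\Big|_{x=1} = 2(1+1) = 4 = u(1).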

Implementation

This description goes through the implementation of a solver for the Poisson equation described above, step by step. First, the DeepXDE module is imported:

import deepxde as dde
import numpy as np

We begin by defining the computational geometry. We can use the built-in class Interval as follows:

geom = dde.geometry.Interval(-1, 1)

Next, we express the PDE residual of the Poisson equation:

def pde(x, y):
    dy_xx = dde.grad.hessian(y, x)
    return dy_xx - 2

The first argument to pde is the network input, i.e., the x-coordinate. The second argument is the network output, i.e., the solution u(x), but here we use y as the name of the variable.
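
As a side note, dde.grad.hessian also accepts the component indices i and j, which select the input coordinates to differentiate against in higher-dimensional problems; in 1D the defaults suffice, so the residual above is equivalent to the following sketch:

def pde(x, y):
    # Second derivative d2y/dx2; i=0 and j=0 select the (only) input coordinate.
    dy_xx = dde.grad.hessian(y, x, i=0, j=0)
    return dy_xx - 2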

Next, we consider the Robin boundary condition and Dirichlet boundary condition respectively.

The location of the Robin boundary condition is defined by a simple Python function. The function should return True for points satisfying x=1 and False otherwise. (Note that because of round-off errors, it is wise to use dde.utils.isclose to test whether two floating-point values are equal.) The argument x to boundary_r is the network input, a d-dimensional vector, where d is the spatial dimension (d=1 here). The second argument, on_boundary, is a boolean that is True if the point x lies on the boundary of the geometry and False otherwise; here we additionally require x to be the right endpoint of the interval, where the Robin boundary condition is imposed.

def boundary_r(x, on_boundary):
    return on_boundary and dde.utils.isclose(x[0], 1)

The location of the Dirichlet boundary condition is defined in a similar way: the function should return True for points satisfying x=-1 and False otherwise. The arguments are the same as for boundary_r; the only difference is that the Dirichlet boundary condition is imposed at the left endpoint of the interval.

def boundary_l(x, on_boundary):
    return on_boundary and dde.utils.isclose(x[0], -1)

Next, we define a function that returns the value of u(x) for points x on the Dirichlet boundary, in this case u(x)=0. Since the exact solution (x+1)^2 is also 0 on this boundary, we can simply use

def func(x):
    return (x + 1) ** 2

Then, the Dirichlet boundary condition is

bc_l = dde.icbc.DirichletBC(geom, func, boundary_l)

For the Robin boundary condition, rather than defining a function that returns the value of u(x) on the boundary, we pass a function of both the input x and the output y: RobinBC imposes du/dn = f(x, y) on the boundary. Here we use a lambda function that maps x and y to y, which at the right endpoint (where the outward normal is +1) gives du/dx = u. The Robin boundary condition is then defined as

bc_r = dde.icbc.RobinBC(geom, lambda X, y: y, boundary_r)

Now, we have specified the geometry, PDE residual, Dirichlet boundary condition and Robin boundary condition. We then define the PDE problem as

data = dde.data.PDE(geom, pde, [bc_l, bc_r], 16, 2, solution=func, num_test=100)

The number 16 is the number of training residual points sampled inside the domain, and the number 2 is the number of training points sampled on the boundary. The argument solution=func passes the reference solution used to compute the error of our numerical solution; it can be omitted if no reference solution is available. The argument num_test=100 means that 100 residual points are used for testing the PDE residual.
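
If you want a quick sanity check of the sampling, the data object exposes the assembled training and test points. The exact layout of these arrays may vary between DeepXDE versions, so treat the following as an optional, illustrative sketch:

# Optional: inspect the sampled points (the layout depends on the DeepXDE version,
# since the training array stacks boundary-condition points on top of the residual points).
print(data.train_x.shape)
print(data.test_x.shape)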

Next, we choose the network. Here, we use a fully connected neural network of depth 4 (i.e., 3 hidden layers) and width 50:

layer_size = [1] + [50] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.nn.FNN(layer_size, activation, initializer)

Now, we have the PDE problem and the network. We build a Model and choose the optimizer and learning rate:

model = dde.Model(data, net)
model.compile("adam", lr=0.001, metrics=["l2 relative error"])

We also compute the L^2 relative error as a metric during training.
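
For reference, this metric is the relative error in the L^2 norm evaluated on the test points,

\frac{\lVert \hat{u} - u \rVert_2}{\lVert u \rVert_2},

where \hat{u} is the network prediction and u is the reference solution func.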

We then train the model for 10000 iterations:

losshistory, train_state = model.train(iterations=10000)
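
After training, the loss history and the predicted and reference solutions can optionally be saved and plotted with DeepXDE's built-in helper (a minimal sketch; file names and plots are handled by DeepXDE):

dde.saveplot(losshistory, train_state, issave=True, isplot=True)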

Complete code

.. literalinclude:: ../../../examples/pinn_forward/Poisson_Robin_1d.py
  :language: python