How to avoid learning a trivial solution? #356
Comments
Just to clarify, is the zero trivial solution a correct solution or not?
Yes, it is a correct solution, but non-physical for this problem.
If it is indeed a correct solution, then the NN will find it with a high chance, because the zero solution is easy to find. If you want to prevent this, there are a few things you can do:

- If you know the value of the solution at one point in the domain (e.g., from a reference solution), enforce that value as an extra point-wise constraint.
- Add an extra term to the loss that penalizes the trivial solution, e.g., with a negative weight.

There could be other ways, but these two are some ideas you can try.
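To make the first idea concrete, here is a toy illustration in plain NumPy (not DeepXDE code; the sample ODE u'' + π²u = 0 and the constraint point x0 = 0.5 are assumptions chosen only for the sketch). Adding a single known point value to the loss removes the zero function as a minimizer. In DeepXDE itself, this kind of constraint can be expressed as a point-wise condition (e.g., with `PointSetBC`).

```python
import numpy as np

# Toy illustration (not DeepXDE code): compare the total loss of the
# trivial solution u ≡ 0 with the true solution u(x) = sqrt(2)*sin(pi*x)
# for u'' + pi^2 * u = 0, once one known point value is added to the loss.
# The point x0 = 0.5 and its reference value are assumptions for this sketch.
x = np.linspace(0, 1, 101)
x0 = 0.5
u_ref = np.sqrt(2) * np.sin(np.pi * x0)

def total_loss(u, u_xx):
    # Mean squared PDE residual plus a single-point data constraint.
    pde_residual = np.mean((u_xx(x) + np.pi**2 * u(x))**2)
    point_constraint = (u(np.array([x0]))[0] - u_ref)**2
    return pde_residual + point_constraint

# Trivial solution: zero residual, but penalized at the constraint point.
trivial = total_loss(lambda t: 0 * t, lambda t: 0 * t)
# True solution: satisfies both the PDE and the point constraint.
true_sol = total_loss(lambda t: np.sqrt(2) * np.sin(np.pi * t),
                      lambda t: -np.pi**2 * np.sqrt(2) * np.sin(np.pi * t))
print(trivial, true_sol)  # the trivial solution now has a strictly larger loss
```

With the point constraint in place, the zero function is no longer a global minimizer of the loss, so gradient descent has a reason to move away from it.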
Thank you for your assistance, @lululxvi. It definitely makes sense that the NN finds the zero solution with a high chance compared to the more complex solutions. For the above problem I am given the reference solution, so I can apply your first suggestion by using the solution at one point in the domain as an added constraint for the NN. However, there are other cases I am looking at where the reference solution is not provided, and there I would have to consider your second suggestion. Is the negative weight a way to increase the complexity of the NN model?

Regarding other alternatives: in another issue discussion #237 I came across an article, https://arxiv.org/abs/2010.05075, that describes unsupervised neural networks for solving quantum eigenvalue problems. One of their examples is the above differential equation, where the PyTorch package is used to build the neural networks. In that case, the loss function consists of the PDE residual but also has extra regularization terms; these terms enable the NN to avoid trivial eigenfunctions and eigenvalues, respectively. Can such terms be added easily to the loss function in DeepXDE? Solving the differential equations with the PyTorch NN seems to consume a lot of RAM, so if I could have the extra regularization terms in DeepXDE I could solve the eigenvalue problem with minimal RAM usage. How can one embed these terms in the loss function? I appreciate your thoughts on this.
My intuition is that a negative weight would increase the complexity of the NN model, but I didn't try this. It is very easy to do: you can simply assume you have another "PDE", 1/f^2, and add it as an extra residual in the loss.
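A toy illustration of this 1/f^2 idea in plain NumPy (not DeepXDE code; the epsilon guard is an addition of this sketch for numerical safety, not part of the original suggestion):

```python
import numpy as np

# Treat the reciprocal of the mean squared function value as an extra
# "PDE" residual.  For the trivial solution f ≡ 0 this term blows up,
# so the optimizer is pushed away from the zero function.
x = np.linspace(0, 1, 101)

def penalty(f_vals, eps=1e-8):
    # eps avoids division by zero at exactly f ≡ 0 (sketch-only guard).
    return 1.0 / (np.mean(f_vals**2) + eps)

trivial_penalty = penalty(np.zeros_like(x))                   # very large
nontrivial_penalty = penalty(np.sqrt(2) * np.sin(np.pi * x))  # order one
print(trivial_penalty, nontrivial_penalty)
```

Because the penalty grows without bound as the function approaches zero everywhere, any non-trivial candidate has a lower combined loss than the trivial one, which is exactly the effect wanted here.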
Hi! My question is probably similar to the one in issue #321. However, I am still having issues with my output after trying some of the suggestions given in the answers there. The particular problem I am looking at is the one-dimensional Schrödinger equation with an infinite square-well potential:
For l=1, the exact eigenfunctions and eigenvalues are:
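These are the standard textbook expressions for an infinite well of width $l$ (in units $\hbar = m = 1$):

$$\psi_n(x) = \sqrt{\frac{2}{l}}\,\sin\!\left(\frac{n\pi x}{l}\right), \qquad E_n = \frac{n^2\pi^2}{2l^2}, \qquad n = 1, 2, \ldots$$

so for $l = 1$ the eigenfunctions reduce to $\sqrt{2}\sin(n\pi x)$.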
I wanted to find out whether I can compute the eigenfunction for n=1 using PINNs, by setting E = pi/2 and h = m = 1. The boundary conditions for this problem are \psi(0) = \psi(1) = 0. This is a snippet of my application of the DeepXDE library to this forward problem, where I tried a few different approaches (including hard constraints) to impose the boundary conditions:
It seems that the neural network prefers the trivial solution whenever the solution at the boundary is zero. When I narrow the domain slightly to [0.1, 0.9] and enforce the values of the exact solution,

`np.sqrt(2)*np.sin(N*np.pi*x)`,

as the Dirichlet boundary condition, I am able to get the expected solution from the neural network. Is there a way to force the neural network to give a non-trivial solution even when the Dirichlet boundary conditions are zero? I appreciate your assistance with this issue.
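For reference, the hard-constraint approach I tried looks schematically like the sketch below (plain NumPy with a stand-in polynomial in place of my actual network, since this is not my real script). In DeepXDE the same transform is attached to the net with something like `net.apply_output_transform`:

```python
import numpy as np

# Hard-constraint output transform: multiply the raw network output by
# x*(1 - x) so that psi(0) = psi(1) = 0 exactly, and the optimizer only
# searches among functions that already satisfy the boundary conditions.
def raw_net(x):
    # Hypothetical surrogate for the NN output, used only for this sketch.
    return 1.0 + 0.5 * x

def psi(x):
    return x * (1.0 - x) * raw_net(x)

x = np.array([0.0, 0.25, 0.5, 1.0])
print(psi(x))  # first and last entries are exactly 0
```

Note that the hard constraint by itself only enforces the boundary values; it does not exclude the zero function, so one of the anti-trivial-solution ideas from the earlier comments is still needed on top of it.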