This repository was archived by the owner on Nov 1, 2023. It is now read-only.
We use a basic ResNet-like architecture with GELU activations; the skip connections mitigate vanishing gradients.
The input dimension equals $\dim \Omega$, and the output dimension is set to $1$ (for real-valued equations).
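A minimal NumPy sketch of such a network (the width, depth, and initialization below are illustrative assumptions, not the repository's actual settings; biases are omitted for brevity). Each residual block adds its GELU-activated output back to its input, which is what keeps gradients flowing:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class ResMLP:
    """Residual MLP: input dim = dim(Omega), output dim = 1 (real-valued)."""
    def __init__(self, in_dim, width=64, depth=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, np.sqrt(2.0 / in_dim), (in_dim, width))
        self.blocks = [rng.normal(0.0, np.sqrt(2.0 / width), (width, width))
                       for _ in range(depth)]
        self.W_out = rng.normal(0.0, np.sqrt(2.0 / width), (width, 1))

    def __call__(self, x):
        h = gelu(x @ self.W_in)
        for W in self.blocks:
            h = h + gelu(h @ W)   # skip connection mitigates vanishing gradients
        return h @ self.W_out     # single real-valued output

net = ResMLP(in_dim=2)                                 # e.g. Omega ⊂ R^2
x = np.random.default_rng(1).uniform(-1, 1, (5, 2))    # 5 sample points
y = net(x)
print(y.shape)  # (5, 1)
```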
## Training
Randomly sample points in $\Omega$ and on $\partial \Omega$, pass them through the network to obtain the outputs and the loss value, then take gradient-descent steps with Adam or SGD.
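The loop above can be sketched end to end on a hypothetical toy problem (not from this repo): the 1D Poisson equation $u''(x) = -\pi^2 \sin(\pi x)$ on $\Omega = (0, 1)$ with $u(0) = u(1) = 0$. To keep the sketch dependency-free, backpropagation is replaced by finite differences, and the Adam update is written out by hand; a real implementation would use a framework's autograd and optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8                                     # hidden width of a tiny tanh net (illustrative)
theta = rng.normal(0.0, 1.0, 3 * H + 1)   # flat parameters: w (H), b (H), v (H), c (1)

def u(theta, x):
    w, b, v, c = theta[:H], theta[H:2*H], theta[2*H:3*H], theta[-1]
    return np.tanh(np.outer(x, w) + b) @ v + c

def loss(theta, x_in, x_bc, h=1e-3):
    # PDE residual via a central second difference, plus a boundary penalty
    u_xx = (u(theta, x_in + h) - 2.0 * u(theta, x_in) + u(theta, x_in - h)) / h**2
    f = -np.pi**2 * np.sin(np.pi * x_in)
    return np.mean((u_xx - f) ** 2) + np.mean(u(theta, x_bc) ** 2)

def grad(theta, x_in, x_bc, eps=1e-5):
    # central-difference gradient: a stand-in for autograd/backprop
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (loss(theta + d, x_in, x_bc) - loss(theta - d, x_in, x_bc)) / (2 * eps)
    return g

x_eval, x_bc = np.linspace(0.05, 0.95, 19), np.array([0.0, 1.0])
loss0 = loss(theta, x_eval, x_bc)

# hand-rolled Adam state and hyperparameters
m, s = np.zeros_like(theta), np.zeros_like(theta)
lr, b1, b2 = 1e-2, 0.9, 0.999
for t in range(1, 301):
    x_in = rng.uniform(0.01, 0.99, 64)    # random collocation points inside Omega
    g = grad(theta, x_in, x_bc)           # x_bc: points on the boundary of Omega
    m = b1 * m + (1 - b1) * g
    s = b2 * s + (1 - b2) * g**2
    theta -= lr * (m / (1 - b1**t)) / (np.sqrt(s / (1 - b2**t)) + 1e-8)

loss_final = loss(theta, x_eval, x_bc)
```

After a few hundred steps the combined residual-plus-boundary loss on a fixed evaluation grid drops well below its initial value, which is the behavior the training procedure above relies on.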