
diffusion_1d_exactBC example #91

Closed
kpratik41 opened this issue Jul 19, 2020 · 2 comments

@kpratik41

kpratik41 commented Jul 19, 2020

Hi Lulu,

Do you have a reference for the problem that you are trying to solve in Diffusion 1d exact BC example?

  1. The func function specifies the expected output (target) of the neural network, which is also the exact solution of the PDE. Why are we specifying that? Using func we generate self.train_y. How is this used in calculating the losses? I don't see the targets variable being used in the losses function in the pde.py file.

  2. What is the use of net.output_modify? (I haven't pulled your updated code yet.) From what I understood, you do a forward pass on the network and then modify the outputs using this.

  3. In error = bc.error(self.train_x, model.net.inputs, outputs, beg, end), can you also clarify the difference between self.train_x and model.net.inputs? As I understand it, self.train_x contains the training points generated by the code, while model.net.inputs is the tensor version of it, which is used for calculating gradients.

  4. In the Euler Beam example, I faced a similar issue to the one in point 1. In that problem, 10 domain points and 2 boundary points are specified. The code generates 12 points in total, filters them for the boundary conditions, and finds one point for each BC, so self.train_x ends up with 16 points: the first 4 correspond to the BCs and the remaining 12 are for the PDE. self.train_y is generated using the func provided to dde.data.PDE. How is this train_y being used at all in calculating the errors?

I am actually dealing with a problem where I want to specify exact values of the solution from a CSV file and also generate more points in the domain to satisfy the BC. I felt that understanding this example first would help me move in the right direction. Thanks.

@lululxvi
Owner

lululxvi commented Jul 21, 2020

For BC:

  • The problem in diffusion_1d_exactBC.py is just a 1D diffusion problem.
  • You can compare diffusion_1d.py with diffusion_1d_exactBC.py. They solve exactly the same problem, but diffusion_1d.py uses a soft constraint for the BC loss, while diffusion_1d_exactBC.py enforces the BC exactly. See the second paragraph on Page 6 of the DeepXDE paper.
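To make the hard-constraint idea concrete, here is a minimal NumPy sketch. The specific transform t * (1 - x**2) * y + sin(pi * x) is my reading of diffusion_1d_exactBC.py for the exact solution u(x, t) = exp(-t) * sin(pi * x) on x in [-1, 1]; treat it as illustrative, not the exact DeepXDE code:

```python
import numpy as np

def output_transform(xt, y_raw):
    """Sketch of a hard-constraint output transform (the idea behind
    the output-modification hook in diffusion_1d_exactBC.py).
    xt: (n, 2) array of inputs [x, t]; y_raw: (n, 1) raw network output."""
    x, t = xt[:, 0:1], xt[:, 1:2]
    # The factor t * (1 - x**2) vanishes on the boundary x = +/-1 and at
    # t = 0, so the BC and IC hold exactly whatever the network outputs.
    return t * (1 - x**2) * y_raw + np.sin(np.pi * x)

# Any raw output satisfies the constraints after the transform:
xt = np.array([[-1.0, 0.5],   # boundary point x = -1
               [1.0, 0.3],    # boundary point x = +1
               [0.25, 0.0]])  # initial-condition point t = 0
y = output_transform(xt, np.random.randn(3, 1))
# rows 0 and 1 give ~0 (the BC); row 2 gives sin(pi * 0.25) (the IC)
```

Because the constraints are built into the network output, the BC/IC loss terms can be dropped and only the PDE residual needs to be trained, which is what makes this a hard (exact) constraint rather than a soft penalty.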

For exact solution:

  • I specify the exact/reference solution (stored in self.train_y) so that I can compute the L2 relative error of the network solution during training. It has nothing to do with the training itself. It is OK not to provide the reference solution, in which case self.train_y = None and you cannot compute the error during training.
  • self.train_x is a NumPy array of the network input, and model.net.inputs is a TensorFlow tensor. This makes it convenient for the user to use both NumPy and TensorFlow when defining the BC.
  • If you want to specify the exact solution from a table, you can use the interpolation in SciPy so that it can handle any input; see Coefficients of PDE as points in grid #79.
  • Another choice: don't use the exact solution during training. Once the network is trained, predict the solution at any points you want (see "How can I use a trained model for new predictions?" in the FAQ) and compute the error there.
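The tabulated-solution route above can be sketched as follows. This is a hypothetical example: the in-memory table stands in for values loaded from a CSV file, and the name func and the cubic interpolation are illustrative choices, not prescribed by DeepXDE:

```python
import numpy as np
from scipy.interpolate import interp1d

# Stand-in for a reference solution loaded from a CSV file; here it is
# generated from a known function purely for illustration.
x_table = np.linspace(-1, 1, 201)
u_table = np.sin(np.pi * x_table)

# SciPy interpolator that can handle arbitrary query points, as
# suggested for tabulated data in issue #79.
ref = interp1d(x_table, u_table, kind="cubic")

def func(x):
    """Reference solution in the (n, 1) -> (n, 1) shape DeepXDE expects."""
    return ref(x[:, 0:1])

# L2 relative error of a (stand-in) prediction against the reference,
# i.e. the metric computed from self.train_y during training:
x_test = np.random.uniform(-1, 1, (100, 1))
u_true = func(x_test)
u_pred = u_true + 1e-4 * np.random.randn(*u_true.shape)  # fake prediction
l2_rel = np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
```

Passing such a func to dde.data.PDE only populates self.train_y for the error metric; as noted above, it does not enter the loss, so the same interpolator can equally be used after training on model.predict outputs.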

@kpratik41
Author

Thanks, Lu Lu, for always replying promptly. I understand it better now.
