
How to train a general model for all temperature distributions in a thermo-mechanical PDE? #482

Closed
mlin26 opened this issue Jan 13, 2022 · 5 comments

Comments

mlin26 commented Jan 13, 2022

Hi Dr. Lu,

I'd like to train a single model that can predict the results for all possible temperature distributions. The stress equation is as follows (I omit the other equations for simplicity):

[image: the stress equation, which contains the thermal term dT]

The 2D domain is 1 by 1. The temperature distribution dT can take any value in the range 0 to 1, and the boundary conditions are the same in every case.

My naive idea is to sample possible dT distributions and then randomly choose some of them for training. dT can be treated as an input variable in x, and the stress as u in pde(x, u).

Could you tell me whether DeepXDE has a related module to implement this? Thanks!

lululxvi (Owner) commented

Yes, that works. Just treat dT as another input coordinate, like x, y, and z.
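
A minimal sketch of this idea, assuming the simplest case where each training point carries a single scalar dT, so the network input is (x, y, dT). The residual below is a placeholder Poisson-like equation, not the actual stress equations from this thread:

import deepxde as dde

# Treat dT as a third input coordinate: (x, y) in [0, 1]^2 and dT in [0, 1].
geom = dde.geometry.Cuboid([0, 0, 0], [1, 1, 1])

def pde(x, u):
    # x[:, 0:1] = x, x[:, 1:2] = y, x[:, 2:3] = dT.
    # Placeholder residual: substitute the actual thermo-mechanical
    # equations here, using x[:, 2:3] wherever dT appears.
    du_xx = dde.grad.hessian(u, x, i=0, j=0)
    du_yy = dde.grad.hessian(u, x, i=1, j=1)
    return du_xx + du_yy - x[:, 2:3]

bc = dde.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, [bc], num_domain=1000, num_boundary=100)

# The network now takes 3 inputs (x, y, dT) instead of 2.
net = dde.nn.FNN([3] + [50] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)
model.compile("adam", lr=0.001)

At prediction time, evaluating the trained model on a grid of (x, y) with the third column held at a chosen dT gives the solution for that temperature level.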


mlin26 commented Jan 19, 2022

Dr. Lu, thanks for your response. It works now!

Another quick question: could you tell me how to resample the anchor data points every given number of epochs?
dde.data.PDE(geom, pde, [bc], num_domain=0, num_boundary=0, anchors=X)

For example, I have 10,000 anchor points and only want to use a random 1,000 of them for every given number of epochs. I have looked at the 'diffusion_1d_resample.py' example, but it might not fit my needs. Thanks!

lululxvi (Owner) commented

You might check PDE.replace_with_anchors.
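
A sketch of one way to wire that up: a custom callback that swaps in a fresh random subset of the anchor pool every so many epochs. The class and parameter names (AnchorResampler, anchor_pool, num_sample, period) are my own, and this assumes PDE.replace_with_anchors simply replaces the current anchors with the given array:

import numpy as np
import deepxde as dde

class AnchorResampler(dde.callbacks.Callback):
    """Every `period` epochs, replace the anchors with a random subset."""

    def __init__(self, anchor_pool, num_sample=1000, period=100):
        super().__init__()
        self.anchor_pool = anchor_pool
        self.num_sample = num_sample
        self.period = period
        self.epochs_since_resample = 0

    def on_epoch_end(self):
        self.epochs_since_resample += 1
        if self.epochs_since_resample < self.period:
            return
        self.epochs_since_resample = 0
        # Draw a random subset (without replacement) and swap it in.
        idx = np.random.choice(len(self.anchor_pool), self.num_sample, replace=False)
        self.model.data.replace_with_anchors(self.anchor_pool[idx])

# Usage, with X the (10000, d) array of candidate anchor points:
# model.train(epochs=40000, callbacks=[AnchorResampler(X, num_sample=1000, period=100)])

Recent DeepXDE versions also ship a resampler callback (dde.callbacks.PDEPointResampler) that redraws the random training points each period, but it does not draw from a fixed anchor pool, which is why a custom callback is sketched here.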


mlin26 commented Feb 11, 2022

Dr. Lu, thanks so much for your help!

I have another question. When I train the model using Adam followed by L-BFGS, training sometimes stops automatically with "Message: CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH".

The console message is:
...........................................................................
INFO:tensorflow:Optimization terminated with:
Message: CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
Objective function value: 0.000424
Number of iterations: 2
Number of functions evaluations: 13
25853 [3.09e-05, 3.64e-05, 8.37e-05, 4.43e-05, 2.25e-04, 1.39e-06, 7.31e-07, 6.60e-07, 1.08e-06, 5.64e-09, 2.08e-08] [3.09e-05, 3.64e-05, 8.37e-05, 4.43e-05, 2.25e-04, 1.39e-06, 7.31e-07, 6.60e-07, 1.08e-06, 5.64e-09, 2.08e-08] []

Best model at step 25853:
train loss: 4.24e-04
test loss: 4.24e-04
test metric: []
...............................................................................

And sometimes L-BFGS stops automatically after 15000 iterations. Could you tell me how to keep the optimizer from stopping that early, for example so that it stops only once the train loss is below 1e-6?

My optimizer code is:
model.compile("adam", lr=0.001)
model.train(epochs=40000)
model.compile("L-BFGS")
losshistory, train_state = model.train()

Thanks!
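
For reference, DeepXDE exposes the L-BFGS stopping criteria through dde.optimizers.set_LBFGS_options, which must be called before compiling with "L-BFGS". The "CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH" message comes from the ftol convergence test, and the stop after 15000 iterations matches the default maxiter. A sketch with illustrative values, not recommendations:

import deepxde as dde

# Tighten the convergence tests and raise the iteration cap so
# L-BFGS keeps running longer. ftol/gtol drive the "CONVERGENCE: ..."
# messages; maxiter defaults to 15000.
dde.optimizers.set_LBFGS_options(ftol=1e-12, gtol=1e-12, maxiter=50000)

model.compile("L-BFGS")
losshistory, train_state = model.train()

Note that stopping exactly when the train loss drops below 1e-6 is not a built-in L-BFGS option; loosening ftol/gtol and raising maxiter only lets the optimizer run longer before its own convergence tests fire.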

mlin26 closed this as completed Feb 24, 2022