
Imposing BC using ids in Burgers' equation #13

Closed · zonexo opened this issue Jan 5, 2021 · 8 comments


zonexo commented Jan 5, 2021

Hi, I have read your papers and looked into the Burgers problem.

From the paper, I understand that to implement BCs using ids, I should use:

```python
m = sn.SciModel([t, x], [L1, u], "mse", "Adam")

m.train([x_data, t_data], ['zeros', (ids_ic_bc, U_ic_bc)], batch_size=256, epochs=10000)
```

I checked and found that x_data and t_data are arrays of shape (100, 100).

But what are the shapes of ids_ic_bc and U_ic_bc?

Thanks for the clarifications.


zonexo commented Jan 5, 2021

Hi,

I just saw the ids example in the SciANN-SolidMechanics-BCs.py code.

So I assume that ids_ic_bc and U_ic_bc are just 1-D vectors.

The values inside ids_ic_bc correspond to the indices of the training points where the BCs are imposed, and U_ic_bc holds the values imposed there, is that so?

However, the x, y locations where the BCs are imposed are not specified. Is the code able to infer from the data where the BCs are actually imposed? In the SciANN-SolidMechanics-BCs.py code, data_d1 holds the x-displacement values; it is obtained through dispx(input_data), and input_data has a rather special list format containing x and y. Must it be done this way?
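
To illustrate my assumption (this sketch is mine, not from the SciANN examples): the ids would simply index rows of the flattened input arrays, so the x/t locations are recovered implicitly from the inputs themselves.

```python
import numpy as np

# Flattened collocation grid, as in the examples: each row i of the
# stacked inputs is one training point (x_i, t_i).
x_grid, t_grid = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(0, 1, 100))
input_data = [x_grid.reshape(-1, 1), t_grid.reshape(-1, 1)]

# The ids are just row indices into that flattened grid, so the
# corresponding locations can be looked up directly:
ids = np.array([0, 50, 99])       # e.g. three points on the t = 0 row
x_at_ids = input_data[0][ids]     # their x locations, shape (3, 1)
t_at_ids = input_data[1][ids]     # their t locations, shape (3, 1)
```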

Thanks.


ehsanhaghighat commented Jan 5, 2021 via email


zonexo commented Jan 5, 2021

Hi, thanks for the clarifications.

I have modified the code slightly to use the ids for the BCs:

```python
x_data_reshape = x_data.reshape(-1, 1)
t_data_reshape = t_data.reshape(-1, 1)

# initial condition
U_ic = np.zeros((100))
ids_ic = np.zeros((100))

for i in range(100):
    ids_ic[i] = i
    U_ic[i] = -math.sin(math.pi * x_data[0, i])

# BC condition
U_bc = np.zeros((200))
ids_bc = np.zeros((200))

# left BC
for i in range(100):
    ids_bc[i] = i * 100
    U_bc[i] = 0

# right BC
for i in range(100):
    ids_bc[100 + i] = (i + 1) * 100 - 1
    U_bc[100 + i] = 0

ids_ic_bc = np.concatenate([ids_ic, ids_bc])
U_ic_bc = np.concatenate([U_ic, U_bc])

ids_ic_bc = ids_ic_bc.astype(int)

# Training
# h = m.train([x_data, t_data], 4*['zero'], learning_rate=0.002, epochs=5000, verbose=0)
h = m.train([x_data, t_data], ['zeros', (ids_ic_bc, U_ic_bc)], batch_size=256, epochs=100)
```

However, I got the error:

IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed

May I know what's wrong? I have also tried x_data_reshape/t_data_reshape in m.train instead, but it gave the same error.

Hope you can help.


zonexo commented Jan 5, 2021

Hi,

I have changed the code slightly to follow the syntax used in the SciANN-SolidMechanics-BCs.py code:

```python
input_data = [x_data.reshape(-1, 1), t_data.reshape(-1, 1)]

# initial condition
U_ic = np.zeros((100))
ids_ic = np.zeros((100))

for i in range(100):
    ids_ic[i] = i
    U_ic[i] = -math.sin(math.pi * x_data[0, i])

# BC condition
U_bc = np.zeros((200))
ids_bc = np.zeros((200))

# left BC
for i in range(100):
    ids_bc[i] = i * 100
    U_bc[i] = 0

# right BC
for i in range(100):
    ids_bc[100 + i] = (i + 1) * 100 - 1
    U_bc[100 + i] = 0

ids_ic_bc = np.concatenate([ids_ic, ids_bc])
ids_ic_bc = ids_ic_bc.astype(int)
U_ic_bc = np.concatenate([U_ic, U_bc])
U_ic_bc = U_ic_bc.reshape(-1, 1)

target_data = [(ids_ic_bc, U_ic_bc), 'zeros']

# Training
# h = m.train([x_data, t_data], ['zeros', (ids_ic_bc, U_ic_bc)], batch_size=256, epochs=100)
h = m.train(
    x_true=input_data,
    y_true=target_data,
    batch_size=256, epochs=100)
```
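
As a quick sanity check on the shapes before calling train (my own sketch, using the names above):

```python
# ids must be 1-D integer indices into the flattened (100*100,) grid,
# and the target values a matching 2-D column array.
assert ids_ic_bc.ndim == 1 and ids_ic_bc.dtype.kind == 'i'
assert U_ic_bc.shape == (ids_ic_bc.size, 1)
assert input_data[0].shape == (100 * 100, 1)
assert input_data[1].shape == (100 * 100, 1)
```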

So is this the correct way?

The code no longer complains. I ran it to check whether I get results similar to those from implementing the BCs with equations. However, my answer is not the same, so most likely something is wrong. I also increased the batch size to 2560; the result is a bit different, but still wrong. The loss is around 1e-6.

https://i.ibb.co/1swqp93/sciann-burger-bc-output.jpg

Hope you can help. Thanks!


sciann commented Jan 5, 2021

The last code looks fine, and I cannot spot any problem here. Have you checked the code for the Burgers problem?

https://github.com/sciann/sciann-applications/blob/master/SciANN-BurgersEquation/SciANN-BurgersEquation.ipynb


zonexo commented Jan 6, 2021

Hi,

That's the code I modified. The original code uses equations to impose the BCs, so I changed it to use the ids implementation. Do you have any suggestions?

Thanks.


zonexo commented Jan 6, 2021

I found an error in my code and it's working now. Thanks for the help. Here's the modified part:

```python
m = sn.SciModel([x, t], [u, L1], "mse", "Adam")

# Sampling (collocation) grid
# To train the network, we need to define a sampling (collocation) grid.
x_data, t_data = np.meshgrid(
    np.linspace(-1, 1, 100),
    np.linspace(0, 1, 100)
)

x_data_reshape = x_data.reshape(-1, 1)
t_data_reshape = t_data.reshape(-1, 1)

input_data = [x_data.reshape(-1, 1), t_data.reshape(-1, 1)]

# initial condition
U_ic = np.zeros((100))
ids_ic = np.zeros((100))

for i in range(100):
    ids_ic[i] = i
    U_ic[i] = -math.sin(math.pi * x_data[0, i])

# BC condition
U_bc = np.zeros((200))
ids_bc = np.zeros((200))

# left BC
for i in range(100):
    ids_bc[i] = i * 100
    U_bc[i] = 0

# right BC
for i in range(100):
    ids_bc[100 + i] = (i + 1) * 100 - 1
    U_bc[100 + i] = 0

# ids_ic_bc = np.unique(np.concatenate([ids_ic, ids_bc]))
ids_ic_bc = np.concatenate([ids_ic, ids_bc])
ids_ic_bc = ids_ic_bc.astype(int)
U_ic_bc = np.concatenate([U_ic, U_bc])
U_ic_bc = U_ic_bc.reshape(-1, 1)

target_data = [(ids_ic_bc, U_ic_bc), 'zeros']

# Training
# h = m.train([x_data, t_data], ['zeros', (ids_ic_bc, U_ic_bc)], batch_size=256, epochs=100)
h = m.train(
    x_true=input_data,
    y_true=target_data,
    learning_rate=0.002,
    batch_size=2560, epochs=5000)
```
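
A minimal sketch for evaluating the trained solution on the same grid, so it can be compared with the equation-based BC run (assuming SciANN's Functional.eval interface; the plotting details are illustrative):

```python
import matplotlib.pyplot as plt

# Evaluate the trained network u(x, t) on the collocation grid
# (uses m, u, x_data, t_data from the snippet above).
u_pred = u.eval(m, [x_data.reshape(-1, 1), t_data.reshape(-1, 1)])
u_pred = u_pred.reshape(x_data.shape)

# Visual check of u(x, t) over the space-time grid.
plt.pcolormesh(t_data, x_data, u_pred, cmap='seismic', shading='auto')
plt.xlabel('t')
plt.ylabel('x')
plt.colorbar(label='u(x, t)')
plt.show()
```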

sciann closed this as completed on Feb 5, 2021
@carlos-hernani

Why isn't this code anywhere in the examples? I still cannot find how to use the ids.
