
Neural network for Turing equation #11

Open
ghost opened this issue Sep 16, 2019 · 6 comments

@ghost

ghost commented Sep 16, 2019

Hello. I am able to run the tutorial example with the Turing equation correctly. Now I am trying to extend that example by developing a neural network (NN) for the Turing equation, but I am getting an error at the integration stage, before training with any numerical data. Following the tutorial example for creating an NN, I am using the following for the Turing equation:

# Run untrained neural net model

model_nn = models.PseudoLinearModel(equation, grid, 
                                    num_time_steps=4,  
                                    stencil_size=3, kernel_size=(3, 1), 
                                    num_layers=4, filters=32,
                                    constrained_accuracy_order=1, 
                                    learned_keys = {'A', 'B'}, 
                                    activation='relu')

tf.random.set_random_seed(0)

integrated_untrained = integrate.integrate_steps(model_nn, initial_state, time_steps)

I am getting the following error at the integrate.integrate_steps() step above.

anaconda2/envs/data-driven-pdes/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 1029, in __init__
    "input tensor must have rank %d" % (num_spatial_dims + 2))
ValueError: input tensor must have rank 4

I would appreciate your help in resolving the above error.

@JiaweiZhuang
Collaborator

JiaweiZhuang commented Sep 16, 2019

Could you print the shapes of your input tensors by [v.shape for v in initial_state.values()] ? They should all have 4 dimensions.

@ghost
Author

ghost commented Sep 17, 2019

Hello, @JiaweiZhuang
The output of [v.shape for v in initial_state.values()] is as follows:

[TensorShape([Dimension(100), Dimension(1)]),
 TensorShape([Dimension(100), Dimension(1)]),
 TensorShape([Dimension(100), Dimension(1)])]

@JiaweiZhuang
Collaborator

So there are only 2 dimensions, (x, y). Add a batch dimension to make it (batch, x, y).

@ghost
Author

ghost commented Sep 17, 2019

Yes, there are 2 dimensions, though the problem is effectively 1D. I am sorry, but I did not understand what you mean by 'adding a batch dimension to make it (batch, x, y)'. Could you please elaborate?

@JiaweiZhuang
Collaborator

The integrator "vectorizes" over multiple samples. If you just have one sample in the input data, you can add a leading "1" dimension. Something like

correct_initial_state = {k: tf.expand_dims(v, 0) for k, v in initial_state.items()}
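A minimal NumPy sketch of what this does (`np.expand_dims` behaves like `tf.expand_dims` here; the key names are illustrative — the real state dict comes from `equation.random_state`):

```python
import numpy as np

# Each state variable is a single (x, y) = (100, 1) sample, matching the
# TensorShape([Dimension(100), Dimension(1)]) entries printed above.
initial_state = {
    "A": np.zeros((100, 1)),
    "B": np.zeros((100, 1)),
}

# Insert a leading batch axis of size 1: (x, y) -> (batch, x, y).
correct_initial_state = {k: np.expand_dims(v, 0)
                         for k, v in initial_state.items()}

print([v.shape for v in correct_initial_state.values()])
# -> [(1, 100, 1), (1, 100, 1)]
```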

@ghost
Author

ghost commented Sep 17, 2019

@JiaweiZhuang: thank you for helping me resolve this issue. I may not be familiar with many of the low-level TensorFlow details used in this code, so please bear with me. Your suggestion to correct the initial state did get rid of the earlier error about the input tensor rank, so integrate.integrate_steps() now works. However, the next step, wrapping the result as an xarray, raises an error, possibly because the key x is not recognized, as shown below:

anaconda2/envs/data-driven-pdes/lib/python3.6/site-packages/xarray/core/dataarray.py", line 101, in _infer_coords_and_dims
    if s != sizes[d]:

KeyError: 'x'

This is the code I am using:

def wrap_as_xarray(integrated, times, x_mesh):
  dr = xarray.DataArray(integrated['A'].numpy().squeeze(), 
                        dims = ('time', 'sample', 'x'), 
                        coords = {'time': times, 'x': x_mesh.squeeze()})
  return dr


equation = TuringEquation(alpha=-0.0001, beta=10, D_A=1, D_B=30)
NX = 100
NY = 1 # 1D can be obtained by having a y dimension of size 1
LX = 200
grid = grids.Grid(size_x=NX, size_y=NY, step=LX/NX)
x_mesh, _ = grid.get_mesh()

initial_state = equation.random_state(grid=grid, seed=12345)
times = equation._timestep*np.arange(0, 1000, 20)
time_steps = np.arange(0, 50)
model = pde.core.models.FiniteDifferenceModel(equation, grid)
res = pde.core.integrate.integrate_steps(model=model, state=initial_state, steps=time_steps, axis=0)

# Run untrained neural net model
correct_initial_state = {k: tf.expand_dims(v, 0) for k, v in initial_state.items()}
model_nn = models.PseudoLinearModel(equation, grid, 
                                    num_time_steps=4,  
                                    stencil_size=3, kernel_size=(3, 1), 
                                    num_layers=4, filters=32,
                                    constrained_accuracy_order=1, 
                                    learned_keys = {'A', 'B'},  
                                    activation='relu')

tf.random.set_random_seed(0)

integrated_untrained = integrate.integrate_steps(model_nn, correct_initial_state, time_steps)

wrap_as_xarray(integrated_untrained, times, x_mesh).isel(time=[0, 2, 10], sample=[4, 10, 16]).plot(col='sample', hue='time', ylim=[-0.2, 0.5])
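A possible cause of the KeyError (an assumption on my part, not confirmed above): `.numpy().squeeze()` drops every size-1 axis, so with a batch of one sample the `sample` axis disappears along with `y`, and the array rank no longer matches `dims=('time', 'sample', 'x')`. A NumPy sketch, assuming the integrated output has shape (time, sample, x, y):

```python
import numpy as np

# Hypothetical integrated output of shape (time, sample, x, y),
# here (50, 1, 100, 1): one sample (batch of 1) and y of size 1.
integrated_A = np.zeros((50, 1, 100, 1))

# A bare .squeeze() removes EVERY size-1 axis, including 'sample',
# leaving rank 2 -- one axis short of dims=('time', 'sample', 'x').
print(integrated_A.squeeze().shape)         # (50, 100)

# Squeezing only the trailing y axis keeps the sample dimension.
print(integrated_A.squeeze(axis=-1).shape)  # (50, 1, 100)
```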
