use.rst
pyrenn

Use a trained neural network

Once a neural network has been trained successfully, it can be used to calculate the neural network outputs for new input data (different from the training data). The input data $\widetilde{P}$ for using the neural network has the same structure as the training input data. The neural network calculates the output data $\hat{\widetilde{Y}}$, which has the same structure as the training output data. Any arbitrary number of data samples Q can be used, resulting in the same number of output samples.

$$\begin{aligned} \widetilde{P} &= \begin{bmatrix} \underline{p}[1] & \underline{p}[2] & \dots & \underline{p}[q] & \dots & \underline{p}[Q] \end{bmatrix}\\ \hat{\widetilde{Y}} &= \begin{bmatrix} \underline{\hat{y}}[1] & \underline{\hat{y}}[2] & \dots & \underline{\hat{y}}[q] & \dots & \underline{\hat{y}}[Q] \end{bmatrix} \end{aligned}$$
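As a sketch of this data layout in NumPy (the sizes R, S and Q below are chosen arbitrarily for illustration): each column of the array is one data sample, matching the column vectors $\underline{p}[q]$ and $\underline{\hat{y}}[q]$ in the equations above.

```python
import numpy as np

# Illustrative sizes: R = 2 network inputs, S = 1 network output,
# Q = 5 data samples. One column per sample, as in the equations above.
R, S, Q = 2, 1, 5

P = np.arange(R * Q, dtype=float).reshape(R, Q)  # columns are p[1] .. p[Q]
Y = np.zeros((S, Q))                             # columns are y[1] .. y[Q]

# The number of output samples always equals the number of input samples Q.
assert P.shape == (R, Q)
assert Y.shape[1] == P.shape[1]
```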

Using previous inputs and outputs for recurrent networks or networks with delayed inputs

Neural networks with delayed recurrent connections between their output and the input layer (green) and networks with delayed inputs d > 0 (blue) need outputs or inputs of previous time steps t − d to calculate the output for time step t. When the neural network is used by applying the input data $\widetilde{P}$, these previous inputs and outputs are not yet known for the first time step(s). pyrenn sets all unknown previous inputs and outputs to zero, which will probably lead to errors in the outputs of the first time steps.
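A toy example of this effect (not pyrenn code — the update rule y[t] = 0.5·p[t] + 0.5·y[t−d] is made up purely for illustration): when the delayed outputs are not known yet, they default to zero, so the first d outputs are computed from incomplete information.

```python
# Toy recurrent rule with output delay d = 2 (hypothetical, for illustration):
#   y[t] = 0.5 * p[t] + 0.5 * y[t - d]
d = 2
p = [1.0, 1.0, 1.0, 1.0, 1.0]

y = []
for t in range(len(p)):
    # Unknown previous outputs are set to zero, as pyrenn does.
    y_prev = y[t - d] if t - d >= 0 else 0.0
    y.append(0.5 * p[t] + 0.5 * y_prev)

# The first d outputs miss the recurrent contribution entirely,
# then the error decays as real previous outputs become available:
# y == [0.5, 0.5, 0.75, 0.75, 0.875]
```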

But pyrenn allows passing previous inputs $\widetilde{P0}$ and previous outputs $\widetilde{Y0}$ to the neural network, if they are known by the user. $\widetilde{P0}$ and $\widetilde{Y0}$ have the same structure as $\widetilde{P}$ and $\widetilde{Y}$. Both must contain the same number of previous data samples Q0, even if one of them is irrelevant for the neural network. The neural network output $\underline{\hat{y}}[q]$ at time q is then calculated using these previous inputs and outputs at time q − d, where $\underline{p}[0]$ and $\underline{\hat{y}}[0]$ are the last elements of $\widetilde{P0}$ and $\widetilde{Y0}$, respectively.

$$\begin{gathered} \overbrace{\begin{bmatrix} \underline{p}[-Q0+1] & \dots & \underline{p}[-1] & \underline{p}[0] \end{bmatrix}}^{\widetilde{P0}} \quad \overbrace{\begin{bmatrix} \underline{p}[1] & \underline{p}[2] & \dots & \underline{p}[q] & \dots & \underline{p}[Q] \end{bmatrix}}^{\widetilde{P}}\\ \underbrace{\begin{bmatrix} \underline{\hat{y}}[-Q0+1] & \dots & \underline{\hat{y}}[-1] & \underline{\hat{y}}[0] \end{bmatrix}}_{\widetilde{Y0}} \quad \underbrace{\underline{\hat{y}}[q] = f\left(\underline{p}[q],\, \underline{p}[q-d],\, \underline{\hat{y}}[q-d]\right)}_{\widetilde{Y}} \end{gathered}$$

Setting previous values for the outputs of hidden layers (red connections) is not possible. If a neural network has internal recurrent connections, the previous outputs of hidden layers are set to zero when they are not yet known.
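A toy example of how known previous outputs remove the start-up error (again not pyrenn code; the delay rule y[t] = 0.5·p[t] + 0.5·y[t−d] is made up for illustration). Note how Python's negative indexing conveniently reaches into Y0 for the first time steps, exactly as the indices [−1] and [0] above reach into $\widetilde{Y0}$:

```python
# Toy recurrent rule with output delay d = 2 (hypothetical, for illustration):
#   y[t] = 0.5 * p[t] + 0.5 * y[t - d]
d = 2
Y0 = [1.0, 1.0]                  # previous outputs y[-1], y[0]; last element is y[0]
p = [1.0, 1.0, 1.0, 1.0, 1.0]

y = []
for t in range(len(p)):
    if t - d >= 0:
        y_prev = y[t - d]        # from the current run
    else:
        y_prev = Y0[t - d]       # negative index reaches into Y0 (needs Q0 >= d)
    y.append(0.5 * p[t] + 0.5 * y_prev)

# With correct previous outputs there is no start-up error:
# y == [1.0, 1.0, 1.0, 1.0, 1.0]
```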

Calculate neural network outputs with NNOut()

Python

Matlab
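The code samples under the Python and Matlab tabs did not survive extraction here. To the best of my knowledge, pyrenn's Python interface is `y = prn.NNOut(P, net)`, optionally `y = prn.NNOut(P, net, P0=P0, Y0=Y0)`. The sketch below only mimics that calling convention: `nn_out` and the `net` dict are hypothetical stand-ins using the toy delay rule y = 0.5·p + 0.5·y_prev, not pyrenn's implementation.

```python
import numpy as np

def nn_out(P, net, P0=None, Y0=None):
    """Hypothetical stand-in for pyrenn's NNOut calling convention.

    P:      input array of shape (R, Q), one column per data sample
    net:    here just a dict holding an output delay d (NOT a pyrenn net)
    P0, Y0: optional previous inputs/outputs with Q0 columns each
    """
    d = net["d"]
    P = np.atleast_2d(np.asarray(P, dtype=float))
    prev = list(Y0[0]) if Y0 is not None else []  # previous outputs, last = y[0]
    y = []
    for q in range(P.shape[1]):
        k = q - d
        if k >= 0:
            y_prev = y[k]            # output from the current run
        elif -k <= len(prev):
            y_prev = prev[k]         # known previous output from Y0
        else:
            y_prev = 0.0             # unknown previous output defaults to zero
        y.append(0.5 * P[0, q] + 0.5 * y_prev)
    return np.array([y])

net = {"d": 2}                       # hypothetical "trained" net with delay 2
P = np.ones((1, 5))

y_cold = nn_out(P, net)                        # previous outputs unknown
y_warm = nn_out(P, net, Y0=np.ones((1, 2)))    # previous outputs supplied
```

With `Y0` supplied, the first time steps are computed from real previous outputs and the start-up error disappears; without it, the first d outputs are distorted by the zero defaults.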