
Combine fireTS library for NARX network with Levenberg Marquardt #5

Closed

TobiasEl opened this issue May 9, 2021 · 6 comments

@TobiasEl

TobiasEl commented May 9, 2021

Hi.
I want to create a NARX (Nonlinear Autoregressive with exogenous variables) model trained with the LM (Levenberg-Marquardt) method.

Since these two methods are not implemented in Keras, I found the fireTS library https://pypi.org/project/fireTS/ (for NARX) and your implementation of LM, and I'm trying to combine them. This is the code:

import tensorflow as tf
import numpy as np
import levenberg_marquardt as lm
from fireTS.models import NARX

input_size = 20000
batch_size = 1000

x_train = np.linspace(-1, 1, input_size, dtype=np.float64)
y_train = np.sinc(10 * x_train)

x_train = tf.expand_dims(tf.cast(x_train, tf.float32), axis=-1)
y_train = tf.expand_dims(tf.cast(y_train, tf.float32), axis=-1)

train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(input_size)
train_dataset = train_dataset.batch(batch_size).cache()
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='tanh', input_shape=(1,)),
    tf.keras.layers.Dense(1, activation='linear')])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss=tf.keras.losses.MeanSquaredError())

model_wrapper = lm.ModelWrapper(model)

model_wrapper.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1.0),
    loss=lm.MeanSquaredError())

mdl1 = NARX(
    model_wrapper,
    auto_order=2,
    exog_order=[2, 2],
    exog_delay=[1, 1])

mdl1.fit(train_dataset, epoch=10)
ypred1 = mdl1.predict(x=x_test, y=y_test)

ypred1

And I get this error:

AttributeError Traceback (most recent call last)

in ()
37 auto_order=2,
38 exog_order=[2, 2],
---> 39 exog_delay=[1, 1])
40
41 mdl1.fit(train_dataset,epoch=10)

2 frames

/usr/local/lib/python3.7/dist-packages/fireTS/core.py in init(self, base_estimator, **base_params)
14
15 def init(self, base_estimator, **base_params):
---> 16 self.base_estimator = base_estimator.set_params(**base_params)
17
18 def set_params(self, **params):

AttributeError: 'ModelWrapper' object has no attribute 'set_params'

Any solution?

@TobiasEl TobiasEl changed the title from "Combine fireTS library for NARX network based on Levenberg Marquardt" to "Combine fireTS library for NARX network with Levenberg Marquardt" on May 9, 2021
@fabiodimarco
Owner

Hi, I think the NARX base_estimator is not intended to be a Keras model.
Looking at the NARX base class TimeSeriesRegressor, the docstring says:
base_estimator must be a model which implements the scikit-learn APIs.
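Something like the following might bridge the two. This is only an untested sketch, not part of fireTS or this repo: the adapter class and its epochs parameter are placeholders of my own, and it assumes ModelWrapper exposes the Keras-style fit/predict methods.

import numpy as np

class SklearnStyleEstimator:
    # Hypothetical adapter exposing the scikit-learn estimator API
    # (set_params / get_params / fit / predict) that fireTS expects.
    def __init__(self, model_wrapper, epochs=10):
        self.model_wrapper = model_wrapper
        self.epochs = epochs

    def set_params(self, **params):
        # fireTS calls this in __init__; store any given parameters.
        for name, value in params.items():
            setattr(self, name, value)
        return self

    def get_params(self, deep=True):
        return {'model_wrapper': self.model_wrapper, 'epochs': self.epochs}

    def fit(self, X, y, **kwargs):
        # Delegate training to the wrapped LM-compiled Keras model.
        self.model_wrapper.fit(np.asarray(X, np.float32),
                               np.asarray(y, np.float32),
                               epochs=self.epochs)
        return self

    def predict(self, X):
        # fireTS expects a flat array of predictions.
        return self.model_wrapper.predict(
            np.asarray(X, np.float32)).reshape(-1)

Then NARX(SklearnStyleEstimator(model_wrapper), ...) should at least get past the set_params error, though I have not verified the rest of the fireTS pipeline with it.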

Do you have any working code that uses the standard Keras fit function?
If you can provide me with that, I can help you use the levenberg-marquardt optimizer.

@TobiasEl
Author

Hi.
I need to create a NARX model (trained with your LM). Do you know of any implementation of NARX with Keras that could use your LM?

@fabiodimarco
Owner

fabiodimarco commented May 10, 2021

I was not able to find any open-source Keras implementation of NARX. If you want to use the LM training algorithm that I implemented, I think you need to implement your own version of NARX using the TensorFlow Keras API. Alternatively, you can try other model architectures that are already implemented in TensorFlow (RNN / CNN).
What is your goal, and why do you want to use LM to train your model? Have you already tried first-order methods (e.g. SGD, Adam, etc.)?

@TobiasEl
Author

TobiasEl commented May 10, 2021

Hi.
Thanks for your help, but it must be NARX trained with LM, because we want to reproduce the method from this paper: https://www.mdpi.com/2227-7390/8/2/241/html . So it must be NARX under LM.

I have found this: https://stackoverflow.com/questions/53087669/narx-implementation-using-keras

I made some changes for the latest versions of tf and keras and combined it with your implementation. It looks like this:


import tensorflow as tf
import numpy as np
import levenberg_marquardt as lm

from tensorflow import keras

import matplotlib.pyplot as plt

numPreviousSteps = 8
inputShape = (None, numPreviousSteps + 2)

class Narx(keras.Model):

    def __init__(self):
        super(Narx, self).__init__(name='narx')
        self.dense = keras.layers.Dense(10, input_shape=inputShape,
                                        activation=keras.activations.tanh)
        self.outputLayer = keras.layers.Dense(1, activation=keras.activations.linear)

    def call(self, inputs, training=False):
        if training:
            x = self.dense(inputs)
            return self.outputLayer(x)
        else: # TODO: what should the network do when used for prediction
            x = self.dense(inputs)
            return self.outputLayer(x)


model = Narx()
model.compile(optimizer=keras.optimizers.RMSprop(0.001),
              loss=tf.losses.mean_squared_error,
              metrics=tf.metrics.mean_absolute_error)

# input data generation
numTsSamples = 1000

# time series to learn from
y = np.random.random((numTsSamples + numPreviousSteps + 1,))
x = np.random.random((numTsSamples,)) # exogenous input

# creation of tapped delay
data = [np.roll(y, -i)[:numTsSamples] for i in range(numPreviousSteps, -1, -1)]
data = [x] + data

# training data
data = np.stack(data, axis=1)

# expected results
yNext = y[numPreviousSteps : -1]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='tanh', input_shape=(1,)),
    tf.keras.layers.Dense(1, activation='linear')])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss=tf.keras.losses.MeanSquaredError())

model_wrapper = lm.ModelWrapper(model)

model_wrapper.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1.0),
    loss=lm.MeanSquaredError())

# model training
model_wrapper.fit(data, yNext)
model_wrapper.predict(y_train)

And it gives me this dimension error:

WARNING:tensorflow:Model was constructed with shape (None, 1) for input KerasTensor(type_spec=TensorSpec(shape=(None, 1), dtype=tf.float32, name='dense_20_input'), name='dense_20_input', description="created by layer 'dense_20_input'"), but it was called on an input with incompatible shape (None, 10).


ValueError Traceback (most recent call last)

in ()
64
65 # model training
---> 66 model_wrapper.fit(data, yNext)

20 frames

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
257 ' incompatible with the layer: expected axis ' + str(axis) +
258 ' of input shape to have value ' + str(value) +
--> 259 ' but received input with shape ' + display_shape(x.shape))
260 # Check shape.
261 if spec.shape is not None and shape.rank is not None:

ValueError: Input 0 of layer dense_20 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape (None, 10)

I think we're getting close. I'm sorry if I'm being too annoying.

@fabiodimarco
Copy link
Owner

In the above code you created a Narx class instance (which is never used) and then replaced it with a feedforward neural network with the wrong input shape.
Based on the code in the stackoverflow answer, here is a version using the LM trainer:

import tensorflow as tf
import numpy as np
import levenberg_marquardt as lm

num_previous_steps = 8

# input data generation
numTsSamples = 1000

# time series to learn from
y = np.random.random((numTsSamples + num_previous_steps + 1,))
x = np.random.random((numTsSamples,))  # exogenous input

# creation of tapped delay
data = [np.roll(y, -i)[:numTsSamples] for i in range(num_previous_steps, -1, -1)]
data = [x] + data

# training data
data = np.stack(data, axis=1)

# expected results
y_next = y[num_previous_steps: -1]

# model training
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='tanh',
                          input_shape=(num_previous_steps + 2,)),
    tf.keras.layers.Dense(1, activation='linear')])

# build the model
_ = model(data)

model_wrapper = lm.ModelWrapper(model)
model_wrapper.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1.0),
    loss=lm.MeanSquaredError())

model_wrapper.fit(data, y_next, epochs=10)

out = model.predict(data)

The code that you provided is just a feedforward neural network where the input data are organized to form a NARX model.
It is missing the main part of NARX, which is the autoregressive output prediction.
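To give an idea, the closed-loop prediction step could look something like this. This is only a rough sketch, not tested: narx_predict is a made-up helper, and the lag ordering inside it has to match however the training rows were assembled.

import numpy as np

# Hypothetical sketch of autoregressive (closed-loop) prediction:
# each new prediction is fed back as a lagged-output input.
def narx_predict(model, x_future, y_history, num_previous_steps=8):
    # y_history: the last num_previous_steps + 1 known outputs, oldest first
    history = list(y_history)
    predictions = []
    for x_t in x_future:
        # newest output first; this must mirror the tapped-delay layout
        lagged = history[-(num_previous_steps + 1):][::-1]
        features = np.array([[x_t] + lagged], dtype=np.float32)
        y_hat = float(model(features).numpy().squeeze())
        predictions.append(y_hat)
        history.append(y_hat)  # feed the prediction back into the delay line
    return np.array(predictions)

The key point is that at prediction time the true future outputs are unknown, so the model's own outputs take their place in the delay line.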

@TobiasEl
Author

Thank you so much for helping me.

It is missing the main part of NARX, which is the autoregressive output prediction.

I have never worked with NARX, so I don't really know how to implement it; I have just been searching for code or libraries. I don't know how to specify that autoregressive output.
