
Parameter estimation #6

Closed

sursu opened this issue Sep 5, 2019 · 5 comments

Comments

sursu commented Sep 5, 2019

Does this package implement parameter estimation for Kalman filters?

oseiskar commented Sep 6, 2019

Quite limited. There is an (undocumented) implementation of the EM algorithm for estimating the process and observation noise covariances (demonstrated in this example). However, this is not always (or even usually) the best way to estimate the parameters.

@oseiskar

Closing as this is not a clear issue or feature request

sursu commented Sep 22, 2019

In that example, I see where EM is performed, but I don't see how to access the estimated parameters.

> However, this is not always (or even usually) the best way to estimate the parameters.

What would you say is the best way to estimate the parameters?

oseiskar commented Sep 30, 2019

> In that example, I see where EM is performed, but I don't see how to access the estimated parameters.

Short version: In a 1d setting, you can simply access the EM-estimated KF parameters as follows:

kf.process_noise[0, ...]      # process noise covariance matrix Q
kf.observation_noise[0, ...]  # observation noise covariance matrix R

Longer version: This is not really documented. The parameters of the Kalman filter are member variables with the same names as its constructor parameters, e.g., kf.observation_noise gives the matrix R.

Here's a complete example:

import simdkalman
import numpy as np
import numpy.random as random

n_time_series = 10  # = N

# constant-velocity model: state = [position, velocity]
kf = simdkalman.KalmanFilter(
    state_transition = np.array([[1, 1], [0, 1]]),
    process_noise = np.diag([0.1, 0.01]),
    observation_model = np.array([[1, 0]]),
    observation_noise = 1.0)

print('initial process noise covariance matrix')
print(kf.process_noise)
print('initial observation noise covariance matrix')
print(kf.observation_noise)

# simulate N independent time series of length 200:
# an integrated random walk plus observation noise
rand = lambda: random.normal(size=(n_time_series, 200))
data = np.cumsum(np.cumsum(rand()*0.02, axis=1) + rand(), axis=1) + rand()*3

# estimate Q and R with EM, separately for each time series
kf = kf.em(data, n_iter=10)

for i in range(n_time_series):
    print('--- time series %d / %d' % (i+1, n_time_series))

    print('estimated process noise covariance matrix')
    print(kf.process_noise[i, ...])

    print('estimated observation noise covariance matrix')
    print(kf.observation_noise[i, ...])

The tricky part is that after running the EM algorithm, which assumes the "multiple independent time series" setting, each time series gets its own R, so kf.observation_noise is a 3d array of size (N, m, m), where N is the number of independent time series and m is the dimension of the observation vector.

Note that even if you set n_time_series = 1 or use a 1d NumPy array for data, the KF parameters are still 3d NumPy arrays, e.g., of size (1, 2, 2) in this example. This is why [0, ...] is needed in the 1d setting.
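
To make the shapes concrete, here is a small sketch continuing the example above; the exact shapes are inferred from the (N, m, m) convention described here, so treat the comments as expectations rather than documented output:

# continuing the example above, after kf = kf.em(data, n_iter=10)
print(kf.process_noise.shape)       # expected (10, 2, 2): one 2x2 Q per series
print(kf.observation_noise.shape)   # expected (10, 1, 1): one 1x1 R per series

# pick out the estimated matrices for, say, the first time series
Q0 = kf.process_noise[0, ...]       # 2x2 process noise covariance
R0 = kf.observation_noise[0, ...]   # 1x1 observation noise covariance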

oseiskar commented Sep 30, 2019

> However, this is not always (or even usually) the best way to estimate the parameters.

> What would you say is the best way to estimate the parameters?

I would say it's usually best to select a model and parameters that optimize the performance / accuracy (however that metric is defined) of your algorithm when compared to ground truth / cross-validation data. For example, if you are trying to predict a time series, do "backtesting" (there are many variants). If the EM algorithm gives you good results in that cross-validation / ground truth comparison, then go for it. However, keep in mind that this is not always the case.
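
As a concrete (if simplistic) illustration of what such backtesting could look like with this package, here is a sketch that grid-searches the noise parameters by multi-step forecast error on a held-out tail of each series. The candidate values, the train/test split and the helper score_candidate are made up for illustration, and kf.predict / pred.observations.mean are used in the same way as in the simdkalman README:

import numpy as np
import simdkalman

def score_candidate(q_pos, q_vel, r, train, test):
    # hypothetical helper: score one fixed (Q, R) candidate by the mean squared
    # error of its multi-step-ahead forecast on the held-out data
    kf = simdkalman.KalmanFilter(
        state_transition = np.array([[1, 1], [0, 1]]),
        process_noise = np.diag([q_pos, q_vel]),
        observation_model = np.array([[1, 0]]),
        observation_noise = r)
    pred = kf.predict(train, test.shape[1])
    return np.mean((pred.observations.mean - test)**2)

# using the simulated `data` from the example above:
# keep the first 150 samples of each series, hold out the last 50
train, test = data[:, :150], data[:, 150:]

# brute-force grid over the noise parameters (candidate values made up)
candidates = [(q, 0.1*q, r) for q in (0.01, 0.1, 1.0) for r in (0.5, 1.0, 3.0)]
best = min(candidates, key=lambda c: score_candidate(*c, train, test))
print('best (q_pos, q_vel, r):', best)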

Kalman filters can be used in many (physical) problems where the underlying state space model and/or the noise are not really linear / Gaussian, but using a Kalman filter anyway yields good results. Then you are in a realm where any guarantees about the optimality of the EM algorithm's output are on shaky ground. Even manually picking values that just "look good" can work better than the EM algorithm. I also don't know of (and haven't worked with) problems where trying to automatically select the A (state transition) matrix using EM or other methods would be a good idea. For example, the matrix A could be derived directly from physics, or designed to have a certain periodicity, like a weekly period (see the sketch below).
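
For the periodicity case, a minimal sketch of what such a hand-designed state transition matrix could look like, using a local level plus a weekly dummy-variable seasonal component; this is a standard state-space construction rather than anything specific to simdkalman:

import numpy as np

period = 7                 # weekly seasonality
n_seasonal = period - 1    # the dummy-variable form uses period - 1 states

# seasonal block: the new effect equals minus the sum of the previous six,
# and the older effects shift down by one position
seasonal = np.zeros((n_seasonal, n_seasonal))
seasonal[0, :] = -1.0
seasonal[1:, :-1] = np.eye(n_seasonal - 1)

# full state: [level, s_t, s_t-1, ..., s_t-5]
A = np.zeros((1 + n_seasonal, 1 + n_seasonal))
A[0, 0] = 1.0              # random-walk level
A[1:, 1:] = seasonal

# observation = level + current seasonal effect
H = np.zeros((1, 1 + n_seasonal))
H[0, 0] = 1.0
H[0, 1] = 1.0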

Another problem that is also called "parameter estimation" concerns free parameters that can, for example, be augmented to the Kalman filter state; this is a bit different from trying to automatically estimate the matrices Q and R.
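
To illustrate that idea, here is a minimal state-augmentation sketch: an unknown constant drift is appended to the state and estimated by the filter/smoother itself rather than by EM. The model and all numbers are made up for illustration, and smoothed.states.mean is indexed following the (N, T, state dimension) convention of the simdkalman README:

import numpy as np
import simdkalman

# state = [position, drift]; the drift is an unknown constant "free parameter"
# augmented to the state
kf = simdkalman.KalmanFilter(
    state_transition = np.array([[1, 1], [0, 1]]),
    process_noise = np.diag([0.01, 1e-9]),   # (practically) no process noise on the drift
    observation_model = np.array([[1, 0]]),
    observation_noise = 1.0)

# simulate one series with a true drift of 0.3 per step
t = np.arange(300)
obs = 0.3*t + np.random.normal(size=t.size)
obs = obs[np.newaxis, :]                     # shape (1, 300): one time series

smoothed = kf.smooth(obs)
print('estimated drift at the end of the series:')
print(smoothed.states.mean[0, -1, 1])        # should land close to 0.3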
