Part of QDaq (https://gitlab.com/qdaq/qdaq) - Qt-based Data Acquisition
Recursive least squares (RLS) refers to algorithms that recursively find the coefficients that minimize a weighted linear least squares cost function.
It is assumed that a measured time-dependent quantity $y(t)$ can be written as
$$ y(t) = \theta^T\cdot \phi(t) $$
where $\theta$ is a vector of unknown parameters and $\phi(t)$ is a vector of known, time-dependent regressors.
A least squares cost function is defined as
$$ J(t) = \sum_{k=1}^{t} w(t,k)\left[ y(k) - \theta^T\cdot\phi(k) \right]^2 $$
where $w(t,k)$ is a weighting function.
RLS algorithms seek to estimate $\theta$ by minimizing $J(t)$ recursively, updating the estimate each time a new data point becomes available.
The solution of the least squares problem (for $w(t,k)=1$) is
$$ \theta(t) = P(t)\cdot\sum_{k=1}^{t}\phi(k)\,y(k), \qquad P(t) = \left[\sum_{k=1}^{t}\phi(k)\cdot\phi^T(k)\right]^{-1} $$
where $P(t)$ is proportional to the covariance matrix of the estimated parameters.
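As an illustration, the following is a minimal sketch (not part of this library) of the batch least-squares solution, written with Eigen since the library itself uses it. The two-parameter model, sample count and noiseless test data are arbitrary choices for the example:

```cpp
// Minimal sketch (not from the library): batch least-squares solution
// theta = P * sum_k phi(k) y(k), with P = [sum_k phi(k) phi(k)^T]^{-1}.
// Model, sample count and data are arbitrary choices for this example.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    const int n = 2;   // two parameters: y(t) = theta_0 + theta_1 * t
    const int T = 100; // number of samples

    Eigen::MatrixXd R = Eigen::MatrixXd::Zero(n, n); // sum of phi * phi^T
    Eigen::VectorXd r = Eigen::VectorXd::Zero(n);    // sum of phi * y

    for (int t = 1; t <= T; ++t) {
        Eigen::Vector2d phi(1.0, double(t));         // regressors (1, t)
        double y = 0.5 + 1.0 * t;                    // noiseless test data
        R += phi * phi.transpose();
        r += phi * y;
    }
    // theta = P * r; solving R * theta = r avoids forming P explicitly
    Eigen::VectorXd theta = R.ldlt().solve(r);
    std::cout << theta.transpose() << std::endl;     // prints ~ 0.5 1
    return 0;
}
```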
When a new data point becomes available the matrix $P^{-1}$ receives a rank-one update,
$$ P^{-1}(t) = P^{-1}(t-1) + \phi(t)\cdot\phi^T(t) $$
The inverse of the updated matrix can be obtained directly from $P(t-1)$ by the matrix inversion lemma, without performing a full matrix inversion at each step. Thus, in RLS algorithms we keep a copy of $P(t)$ (or of a factorization of it) and update it recursively.
The parameters are updated by
$$ \theta(t) = \theta(t-1) + k(t)\, e(t) $$
where $k(t)$ is a gain vector computed from $P(t)$ and $\phi(t)$, and $e(t) = y(t) - \theta^T(t-1)\cdot\phi(t)$ is the prediction error.
Two typical examples:

- A system with input $u(t)$ and output $y(t) = s\cdot u(t) + y_0$, where $s$ and $y_0$ vary with time. RLS is used to obtain an estimate of $s$ and $y_0$.
- A timeseries $y(t)$ is fitted with a polynomial $f(t;\theta) = \theta_0 + \theta_1\cdot t$ to estimate the time-dependent parameters $(\theta_0,\theta_1)$ and thus obtain the derivative $dy/dt$.
In this algorithm the weighting function is
$$ w(t,k) = \lambda^{t-k} $$
where $0 < \lambda \le 1$ is the forgetting factor, which controls how quickly older data points are discounted.
The recursion is given by
$$ e(t) = y(t) - \theta^T(t-1)\cdot \phi(t) $$
$$ u = P(t-1)\cdot \phi(t) $$
$$ k = u / \left[ \lambda + \phi^T(t)\cdot u \right] $$
$$ \theta(t) = \theta(t-1) + k\, e(t) $$
$$ P(t) = \left[ P(t-1) - k \cdot u^T \right]/\lambda $$
$$ J(t) = \lambda J(t-1) + e^2(t) $$
The algorithm is still valid for $\lambda = 1$, in which case all data points are equally weighted and the memory of the filter grows indefinitely.
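For concreteness, here is a minimal sketch of the above recursion using Eigen. It is not the implementation found in RLS.h; the scalar output and the initialization $\theta(0)=0$, $P(0)=p_0 I$ with large $p_0$ are common conventions assumed for the example:

```cpp
// Minimal sketch of the exponential-forgetting recursion above (scalar
// output, Eigen types); not the implementation found in RLS.h.
#include <Eigen/Dense>

struct ExpForgettingRLS {
    double lambda;         // forgetting factor, 0 < lambda <= 1
    Eigen::VectorXd theta; // parameter estimate theta(t)
    Eigen::MatrixXd P;     // matrix P(t)
    double J = 0.0;        // cost J(t)

    // Common convention: start from theta = 0 and P = p0 * I, p0 large
    ExpForgettingRLS(int n, double lam, double p0 = 1e6)
        : lambda(lam),
          theta(Eigen::VectorXd::Zero(n)),
          P(p0 * Eigen::MatrixXd::Identity(n, n)) {}

    // One recursion step for a new sample (phi, y)
    void update(const Eigen::VectorXd& phi, double y) {
        double e = y - theta.dot(phi);                 // prediction error
        Eigen::VectorXd u = P * phi;
        Eigen::VectorXd k = u / (lambda + phi.dot(u)); // gain vector
        theta += k * e;
        P = (P - k * u.transpose()) / lambda;
        J = lambda * J + e * e;
    }
};
```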
The square root algorithm (initially due to Potter (1963)) utilizes the fact that $P$ is by construction symmetric and positive definite, and thus the matrix has a square root. Such matrices can be written as
$$ P = Q\cdot Q^T $$
where $Q$ is a (non-unique) square root of $P$. It can be easily shown that the recursion for $P$ can be replaced by a recursion for $Q$, with
$$ P(t) = Q(t)\cdot Q^T(t) $$
holding at every step. The benefit of propagating $Q$ instead of $P$ is numerical robustness: $P$ stays symmetric and positive definite by construction, even in the presence of round-off errors. The RLS recursion for $Q$ is given by
$$ e(t) = y(t) - \theta^T(t-1)\cdot \phi(t) $$
$$ u = Q^T(t-1) \cdot \phi(t) $$
$$ \beta = \lambda + u^T \cdot u $$
$$ \alpha = 1 / \left[\beta + \sqrt{\beta\lambda}\right] $$
$$ k = Q(t-1) \cdot u $$
$$ \theta(t) = \theta(t-1) + k\, \left[ e(t)/\beta \right] $$
$$ Q(t) = \left[ Q(t-1) - \alpha\, k \cdot u^T \right]/\sqrt{\lambda} $$
$$ J(t) = \lambda J(t-1) + e^2(t) $$
This algorithm is taken from Ljung & Söderström (1987), "Theory and Practice of Recursive Identification", p. 328.
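A minimal sketch of this square-root recursion, under the same assumptions as the previous snippet (Eigen, scalar output, $Q(0) = q_0 I$ as an assumed initialization), might look like:

```cpp
// Minimal sketch of the Potter square-root recursion above; P = Q * Q^T
// is propagated through Q only. Not the implementation found in RLS.h.
#include <Eigen/Dense>
#include <cmath>

struct SquareRootRLS {
    double lambda;         // forgetting factor
    Eigen::VectorXd theta; // parameter estimate
    Eigen::MatrixXd Q;     // square root of P
    double J = 0.0;

    SquareRootRLS(int n, double lam, double q0 = 1e3)
        : lambda(lam),
          theta(Eigen::VectorXd::Zero(n)),
          Q(q0 * Eigen::MatrixXd::Identity(n, n)) {}

    void update(const Eigen::VectorXd& phi, double y) {
        double e = y - theta.dot(phi);
        Eigen::VectorXd u = Q.transpose() * phi;
        double beta = lambda + u.squaredNorm();   // lambda + u^T u
        double alpha = 1.0 / (beta + std::sqrt(beta * lambda));
        Eigen::VectorXd k = Q * u;
        theta += k * (e / beta);
        Q = (Q - alpha * k * u.transpose()) / std::sqrt(lambda);
        J = lambda * J + e * e;
    }
};
```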
The sliding-block RLS uses the last $N$ data points to estimate the parameters; the weighting function is $w(t,k)=1$ for $t-N < k \le t$ and zero otherwise. When a new data point becomes available, the estimate is first updated with the new point and then down-dated to remove the contribution of the oldest point, $t-N$.
The updating sequence is given by
$$ e(t) = y(t) - \theta^T(t-1)\cdot \phi(t) $$
$$ u = P(t-1)\cdot \phi(t) $$
$$ \beta = \left[ 1 + \phi(t)^T \cdot u \right]^{-1} $$
$$ k = \beta\, u $$
$$ \bar{\theta}(t) = \theta(t-1) + k\, e(t) $$
$$ \bar{P}(t) = \left[P(t-1) - k \cdot u^T \right] $$
$$ e'(t) = y(t) - \bar{\theta}^T(t) \cdot \phi(t) $$
$$ \bar{J}(t) = J(t-1) + e^2(t)\,\beta\,(1-\beta) + e'^2(t) $$
And the down-dating sequence is
$$ e(t-N) = y(t-N) - \bar{\theta}^T(t)\cdot \phi(t-N) $$
$$ u = \bar{P}(t)\cdot \phi(t-N) $$
$$ \beta = \left[ 1 - \phi(t-N)^T \cdot u \right]^{-1} $$
$$ k = \beta\, u $$
$$ \theta(t) = \bar{\theta}(t) - k\, e(t-N) $$
$$ P(t) = \left[\bar{P}(t) + k \cdot u^T\right] $$
$$ e'(t-N) = y(t-N) - \theta^T(t)\cdot \phi(t-N) $$
$$ J(t) = \bar{J}(t) - e^2(t-N)\,\beta\,(1-\beta) - e'^2(t-N) $$
Similar to the above, the block RLS algorithm can be formulated with updating and down-dating of the square root $Q$ of the matrix $P$, using factorization-modification methods such as those of Gill et al. (1974).
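A minimal sketch of the update/down-date pair follows (again Eigen and scalar output; the std::deque block memory and the initialization are choices made for the example, not taken from the library):

```cpp
// Minimal sketch of the sliding-block recursion above: each step updates
// with the new sample and, once the window is full, down-dates the sample
// leaving it. A std::deque plays the role of the block memory.
#include <Eigen/Dense>
#include <deque>
#include <utility>

struct SlidingBlockRLS {
    int N; // window length
    Eigen::VectorXd theta;
    Eigen::MatrixXd P;
    double J = 0.0;
    std::deque<std::pair<Eigen::VectorXd, double>> window;

    SlidingBlockRLS(int n, int N_, double p0 = 1e6)
        : N(N_), theta(Eigen::VectorXd::Zero(n)),
          P(p0 * Eigen::MatrixXd::Identity(n, n)) {}

    void step(const Eigen::VectorXd& phi, double y) {
        // -- updating sequence --
        double e = y - theta.dot(phi);
        Eigen::VectorXd u = P * phi;
        double beta = 1.0 / (1.0 + phi.dot(u));
        Eigen::VectorXd k = beta * u;
        theta += k * e;
        P -= k * u.transpose();
        double ep = y - theta.dot(phi);           // posterior error
        J += e * e * beta * (1.0 - beta) + ep * ep;

        window.emplace_back(phi, y);
        if ((int)window.size() <= N) return;      // window not yet full

        // -- down-dating sequence: remove the oldest sample (t - N) --
        const Eigen::VectorXd phiOld = window.front().first;
        const double yOld = window.front().second;
        window.pop_front();
        double eo = yOld - theta.dot(phiOld);
        Eigen::VectorXd uo = P * phiOld;
        double betaO = 1.0 / (1.0 - phiOld.dot(uo));
        Eigen::VectorXd ko = betaO * uo;
        theta -= ko * eo;
        P += ko * uo.transpose();
        double eop = yOld - theta.dot(phiOld);
        J -= eo * eo * betaO * (1.0 - betaO) + eop * eop;
    }
};
```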
A time series $y(t)$, sampled at regular time intervals, is fitted with a polynomial in $t$, as in the second example above, to estimate its value and derivatives.
However, during long-term operation of a polynomial RLS algorithm the time variable, and with it the regressor vector $\phi(t) = (1, t, t^2, \dots)^T$, grows without bound and the problem becomes numerically ill-conditioned. A solution to this problem is to operate synchronously 2 RLS loops with a phase difference of $N/2$: each loop is periodically restarted with a fresh time origin, and the output is taken from the loop with the longer history. In sliding-window block RLS the same scheme can be applied with $N$ equal to the window length.
This algorithm is implemented in the RLS::PolyRLS C++ class, which is defined in PolyRLS.h.
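The exact interface of RLS::PolyRLS is documented in PolyRLS.h; the snippet below is only a sketch of the two-loop idea for a degree-1 polynomial, where the names, the restart half-period `H` and the initialization are invented for the example:

```cpp
#include <Eigen/Dense>

// One growing-memory RLS loop fitting y ~ theta_0 + theta_1*(t - t0),
// with its own time origin t0 so the regressor stays bounded.
struct LinFitLoop {
    Eigen::Vector2d theta = Eigen::Vector2d::Zero();
    Eigen::Matrix2d P = 1e6 * Eigen::Matrix2d::Identity();
    long t0 = 0;   // time origin of this loop
    long age = 0;  // samples processed since the last restart

    void reset(long t) {
        theta.setZero();
        P = 1e6 * Eigen::Matrix2d::Identity();
        t0 = t;
        age = 0;
    }

    void update(long t, double y) {
        Eigen::Vector2d phi(1.0, double(t - t0)); // local, bounded time
        Eigen::Vector2d u = P * phi;
        Eigen::Vector2d k = u / (1.0 + phi.dot(u));
        theta += k * (y - theta.dot(phi));
        P -= k * u.transpose();
        ++age;
    }
};

// Two loops restarted alternately every H samples; each loop lives for
// 2*H samples, so the older loop always has at least H samples of history.
struct TwoLoopPolyRLS {
    long H;
    LinFitLoop a, b;
    explicit TwoLoopPolyRLS(long halfPeriod) : H(halfPeriod) {}

    // Feed one sample; returns the current slope estimate dy/dt = theta_1
    double step(long t, double y) {
        if (t % (2 * H) == 0) a.reset(t);
        if ((t + H) % (2 * H) == 0) b.reset(t);
        a.update(t, y);
        b.update(t, y);
        return (a.age >= b.age ? a.theta : b.theta)(1);
    }
};
```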
The algorithms are organized in a number of class templates which are defined in 2 header files:

- RLS.h defines the RLS algorithms
- PolyRLS.h defines the PolyRLS class for fitting timeseries with polynomials
In the filter folder there are 2 command-line applications which can be used for testing and demonstrate most of the library's features. More details can be found in the relevant README.
The project can be built with CMake using the following commands:

```
> mkdir .build
> cd .build
> cmake ..
> make / nmake
```
- Implement the $U D U^T$ factorization of the covariance matrix and the updating algorithm by Bierman (1977)
- Document the C++ classes
- Zhang, Q. (2000). Some Implementation Aspects of Sliding Window Least Squares Algorithms. IFAC Proceedings Volumes, 33(15), 763–768. https://doi.org/10.1016/S1474-6670(17)39844-0
- Ljung, L. and Söderström, T. (1987). Theory and Practice of Recursive Identification. MIT Press.
- Bierman, G. J. (1977). Factorization Methods for Discrete Sequential Estimation. Academic Press.
- Gill, P. E., Golub, G. H., Murray, W. and Saunders, M. A. (1974). Methods for Modifying Matrix Factorizations. Mathematics of Computation, 28(126). https://www.ams.org/journals/mcom/1974-28-126/S0025-5718-1974-0343558-6/
- @nickspanos
  - Initial implementation of the algorithms
- @MatinaDt
  - Implemented the use of the Eigen library
  - Tested Cholesky decomposition methods