Sliding window order-recursive least-squares algorithms

The generalized sliding window recursive least-squares (GSW RLS) algorithm

1995

In this paper, we derive a new RLS algorithm: the Generalized Sliding Window RLS (GSW RLS) algorithm, which has better tracking ability than the SWC RLS algorithm. This algorithm uses a generalized window consisting of an exponential window for the L0 most recent data and the same, but attenuated, exponential window for the rest of the data. We give a theoretical proof that the use of this window leads to a better compromise between the Excess Mean Squared Errors due to estimation noise and lag noise. Furthermore, after providing a fast version of the GSW RLS algorithm, namely the GSW FTF algorithm, we apply the subsampled-updating technique to derive the FSU GSW FTF algorithm, a doubly fast version of the GSW RLS algorithm.
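As a concrete illustration, the generalized window described in the abstract can be sketched as follows. This is a hypothetical reading, not the paper's own notation: an exponential window with forgetting factor lam for the L0 newest samples, and the same window attenuated by a factor beta for all older data.

```python
import numpy as np

def gsw_weights(n, L0, lam, beta):
    """Hypothetical sketch of the generalized sliding window: an exponential
    window for the L0 most recent samples, and the same exponential window
    attenuated by beta for everything older. Sample k = n-1 is the newest."""
    ages = np.arange(n)[::-1]          # age 0 = newest sample
    w = lam ** ages.astype(float)      # plain exponential window
    w[ages >= L0] *= beta              # attenuate samples older than L0
    return w
```

Setting beta = 0 recovers a hard sliding window of length L0, and beta = 1 recovers the ordinary exponential window, which is one way to read the claimed compromise between estimation noise and lag noise.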

An adaptive LS algorithm based on orthogonal Householder transformations

Proceedings of Third International Conference on Electronics, Circuits, and Systems, 1996

This paper presents an adaptive exponentially weighted algorithm for least squares (LS) system identification. The algorithm updates an inverse "square root" factor of the input data correlation matrix, by applying numerically robust orthogonal Householder transformations. The scheme avoids, almost entirely, costly square roots and divisions (present in other numerically well behaved adaptive LS schemes) and provides directly the estimates of the unknown system coefficients. Furthermore, it offers enhanced parallelism, which leads to efficient implementations. A square array architecture for implementing the new algorithm, which comprises simple operating blocks, is described. The numerically robust behaviour of the algorithm is demonstrated through simulations.

Analysis of a recursive least squares hyperbolic rotation algorithm for signal processing

Linear Algebra and its Applications, 1988

The application of hyperbolic plane rotations to the least squares downdating problem arising in windowed recursive least squares signal processing is studied. A forward error analysis is given to show that this algorithm can be expected to perform well in the presence of rounding errors, provided that the problem is not too ill-conditioned.
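The downdating step analyzed in this paper can be sketched with a simple hyperbolic-rotation routine. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: it removes one row x from the upper-triangular Cholesky factor R, and assumes the downdated matrix stays positive definite (the "not too ill-conditioned" proviso above).

```python
import numpy as np

def hyperbolic_downdate(R, x):
    """Downdate upper-triangular R so that R_new^T R_new = R^T R - x x^T,
    by annihilating x with hyperbolic plane rotations (c^2 - s^2 = 1).
    Breaks down (sqrt of a negative number) if the problem is ill posed."""
    R = R.astype(float).copy()
    x = x.astype(float).copy()
    for i in range(len(x)):
        a, b = R[i, i], x[i]
        r = np.sqrt(a * a - b * b)     # requires a^2 > b^2
        c, s = a / r, b / r            # hyperbolic rotation parameters
        Ri = R[i, i:].copy()
        R[i, i:] = c * Ri - s * x[i:]  # rotated factor row; R[i, i] becomes r
        x[i:] = -s * Ri + c * x[i:]    # x[i] is driven to zero
    return R
```

Each rotation preserves the indefinite form R^T R - x x^T, which is exactly why the downdated factor is correct; the error analysis in the paper concerns how this identity degrades under rounding.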

Recursive triangular array ladder algorithms

IEEE Transactions on Signal Processing, 1991

Two recursive least squares (RLS) ladder algorithms for implementation on triangular systolic arrays are presented. The first algorithm computes transversal forward/backward predictor coefficients, ladder reflection coefficients, and forward/backward residual energies. This algorithm has a complexity of three multiplications and additions per "rotational" (triangular array) element. A second algorithm is presented that facilitates the computation of only the ladder reflection coefficients and the forward/backward residual energies, at a cost of two multiplications and additions per "rotational" element. This way, both algorithms are computationally more efficient than the traditional recursive QR decomposition (Gentleman and Kung array) for any order. The second algorithm is more efficient than Cioffi's pipelineable linear array fast QR adaptive filter for orders of less than 22 in the prewindowed case, and more efficient than the fast QR for orders of less than 43 in the more general covariance case. A comparison of the presented new algorithms and the prominent QR methods is given in the paper.

A revisit to block and recursive least squares for parameter estimation

Computers & Electrical Engineering, 2004

In this paper, the classical least squares (LS) and recursive least squares (RLS) methods for parameter estimation are reexamined in the light of present-day computing capabilities. It is demonstrated that for linear time-invariant systems, the performance of blockwise least squares (BLS) is always superior to that of RLS. In the context of parameter estimation for dynamic systems, the current computational capability of personal computers is more than adequate for BLS. However, for time-varying systems with abrupt parameter changes, standard blockwise LS may no longer be suitable due to its inefficiency in discarding "old" data. To deal with this limitation, a novel sliding window blockwise least squares approach with an automatically adjustable window length, triggered by a change detection scheme, is proposed. Two types of sliding windows, rectangular and exponential, have been investigated. The performance of the proposed algorithm is illustrated by comparison with standard RLS and an exponentially weighted RLS (EWRLS) using two examples. The simulation results show that: (1) BLS has better performance than RLS; (2) the proposed variable-length sliding window blockwise least squares (VLSWBLS) algorithm can outperform RLS with forgetting factors; (3) the scheme has good tracking ability for abrupt parameter changes and ensures high accuracy of the parameter estimates at steady state; and (4) the computational burden of
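The rectangular sliding-window blockwise LS idea from this abstract can be sketched as follows. Function and parameter names are illustrative, and the change-detection trigger for the adjustable window length is omitted; a fixed window is solved from scratch at each step, which is exactly the "discard old data outright" behaviour the paper advocates for abrupt parameter changes.

```python
import numpy as np

def sw_blockwise_ls(Phi, y, window):
    """Sketch of rectangular sliding-window blockwise LS: at each time step,
    re-solve the LS problem on the most recent `window` samples only.
    Phi: (n, p) regressor matrix; y: (n,) observations."""
    estimates = []
    for t in range(window, len(y) + 1):
        P, z = Phi[t - window:t], y[t - window:t]
        theta, *_ = np.linalg.lstsq(P, z, rcond=None)  # blockwise solve
        estimates.append(theta)
    return np.array(estimates)
```

After an abrupt parameter change, the estimate is exact again as soon as the window has slid entirely past the change point, whereas RLS with a forgetting factor only decays the old data geometrically.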

New fast QR decomposition least squares adaptive algorithms

IEEE Transactions on Signal Processing, 1998

This paper presents two new, closely related adaptive algorithms for LS system identification. The starting point for the derivation of the algorithms is the inverse Cholesky factor of the data correlation matrix, obtained via a QR decomposition (QRD). Both algorithms are of O(p) computational complexity, with p being the order of the system. The first algorithm is a fixed-order QRD scheme with enhanced parallelism. The second is an order-recursive lattice-type algorithm based exclusively on orthogonal Givens rotations, with lower complexity compared to previously derived ones. Both algorithms are derived following a new approach, which exploits efficient time and order updates of a specific state vector quantity.

Index Terms: adaptive algorithms, fast algorithms.

I. INTRODUCTION. Adaptive least squares algorithms for system identification [1]-[7] are popular due to their fast converging properties and are used in a variety of applications, such as channel equalization, echo cancellation, spectral analysis, and control, to name but a few. Among the various efficiency issues characterizing the performance of an algorithm, those of computational complexity, parallelism, and numerical robustness are of particular importance, especially in applications where medium to long filter lengths are required. It may sometimes be preferable to use an algorithm of higher complexity but with good numerical error robustness, since this may allow its implementation with shorter wordlengths and fixed-point arithmetic. This has led to the development of a class of adaptive algorithms based on the numerically robust QR factorization of the input data matrix via the Givens rotation approach [23]. The development of Givens-rotation-based QR decomposition algorithms has evolved along three basic directions. Schemes of O(p^2) complexity per time iteration were the first to be derived, with p being the order of the system [8], [9].
These schemes update the Cholesky factor of the input data correlation matrix and can be implemented efficiently on two-dimensional (2-D) systolic arrays. Furthermore, as shown in [9], the modeling error can be extracted directly, without it being necessary to compute explicitly the estimates of the transversal parameters of the unknown FIR system.
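The basic Givens-rotation time update of the Cholesky factor referred to above can be sketched as follows. This is a minimal O(p^2)-per-update illustration with the forgetting factor omitted; it is the classical scheme the introduction describes, not the paper's own O(p) algorithms.

```python
import numpy as np

def givens_update(R, x):
    """Rotate a new regressor row x into the upper-triangular Cholesky
    factor R so that R_new^T R_new = R^T R + x x^T, using orthogonal
    Givens rotations (c^2 + s^2 = 1)."""
    R = R.astype(float).copy()
    x = x.astype(float).copy()
    for i in range(len(x)):
        a, b = R[i, i], x[i]
        r = np.hypot(a, b)             # new, enlarged diagonal entry
        if r == 0.0:
            continue
        c, s = a / r, b / r            # Givens rotation parameters
        Ri = R[i, i:].copy()
        R[i, i:] = c * Ri + s * x[i:]  # rotated factor row
        x[i:] = -s * Ri + c * x[i:]    # x[i] is driven to zero
    return R
```

Because each rotation is orthogonal, the update is backward stable, which is the numerical-robustness argument made in the introduction above.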

The fast subsampled-updating recursive least-squares (FSU RLS) algorithm for adaptive filtering based on displacement structure and the FFT

Signal Processing, 1994

In this paper, we derive a new fast algorithm for Recursive Least-Squares (RLS) adaptive filtering. This algorithm is especially suited for adapting very long filters, such as in the acoustic echo cancellation problem. The starting point is to introduce subsampled updating (SU) in the RLS algorithm. In the SU RLS algorithm, the Kalman gain and the likelihood variable are matrices. Due to the shift invariance of the adaptive FIR filtering problem, these matrices exhibit a low displacement rank. This leads to a representation of these quantities in terms of sums of products of triangular Toeplitz matrices. Finally, the product of these Toeplitz matrices with a vector can be computed efficiently by using the Fast Fourier Transform (FFT).
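The FFT trick at the heart of this approach, multiplying a triangular Toeplitz matrix by a vector via circular convolution, can be sketched as follows (a generic illustration of the technique, not the FSU RLS algorithm itself):

```python
import numpy as np

def toeplitz_lower_matvec(t, v):
    """Multiply the lower-triangular Toeplitz matrix with first column t
    by the vector v, via zero-padded circular convolution:
    O(n log n) instead of the O(n^2) dense product."""
    n = len(t)
    m = 2 * n                          # pad to avoid circular wrap-around
    T = np.fft.rfft(np.concatenate([t, np.zeros(n)]))
    V = np.fft.rfft(np.concatenate([v, np.zeros(n)]))
    return np.fft.irfft(T * V, m)[:n]  # first n entries of the linear convolution
```

Since (Tv)[i] = sum_j t[i-j] v[j] is just a causal convolution, the FFT computes all n entries at once, which is what makes the subsampled update of long filters affordable.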

An Improved Gain Vector to Enhance Convergence Characteristics of Recursive Least Squares Algorithm

The Recursive Least Squares (RLS) algorithm is renowned for its rapid convergence, but in some scenarios it fails to show the swiftness required by several applications. Such failure may result from different limiting conditions. The gain vector plays an essential role in the performance of the RLS algorithm. This paper proposes a modification of the gain vector that makes the RLS algorithm perform much better in terms of convergence, without adding significant complexity. Simulation results are presented that validate the finding, and a comparison with the conventional RLS algorithm is given.
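For reference, the conventional RLS recursion, with the gain vector this paper proposes to modify, can be sketched as follows. The forgetting factor lam and the initialization constant delta are generic textbook choices, not values from the paper.

```python
import numpy as np

def rls(Phi, y, lam=0.99, delta=100.0):
    """Conventional RLS recursion. Phi: (n, p) regressors; y: (n,) desired
    signal. P is the (scaled) inverse correlation matrix, initialized to
    delta * I; k is the gain vector that drives each coefficient update."""
    p = Phi.shape[1]
    theta = np.zeros(p)
    P = delta * np.eye(p)
    for x, d in zip(Phi, y):
        k = P @ x / (lam + x @ P @ x)      # gain vector
        e = d - x @ theta                  # a priori error
        theta = theta + k * e              # coefficient update
        P = (P - np.outer(k, x @ P)) / lam # inverse-correlation update
    return theta
```

Any modification of k trades off how aggressively each new error e is injected into theta, which is where the convergence improvement claimed above would enter.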