ECE5550: Applied Kalman Filtering


KALMAN FILTER GENERALIZATIONS

5.1: Maintaining symmetry of covariance matrices

■ The Kalman filter as described so far is theoretically correct, but has known vulnerabilities and limitations in practical implementations.

■ In this unit of notes, we consider the following issues:

1. Improving numeric robustness;

2. Sequential measurement processing and square-root filtering;

3. Dealing with auto- and cross-correlated sensor or process noise;

4. Extending the filter to prediction and smoothing;

5. Reduced-order filtering;

6. Using residue analysis to detect sensor faults.

Improving numeric robustness

■ Within the filter, the covariance matrices $\Sigma_{\tilde{x},k}^-$ and $\Sigma_{\tilde{x},k}^+$ must remain

1. Symmetric, and

2. Positive definite (all eigenvalues strictly positive).

■ It is possible for both conditions to be violated due to round-off errors in a computer implementation.

■ We wish to find ways to limit or eliminate these problems.

Dealing with loss of symmetry

■ If a covariance matrix becomes asymmetric or non-positive definite, the cause must lie in either the time-update or the measurement-update equations of the filter.

■ Consider first the time-update equation:
$$\Sigma_{\tilde{x},k}^- = A\Sigma_{\tilde{x},k-1}^+A^T + \Sigma_{\tilde{w}}.$$

• Because we are adding two positive-definite quantities together, the result must be positive definite.

• A “suitable implementation” of the matrix product $A\Sigma_{\tilde{x},k-1}^+A^T$ will avoid loss of symmetry in the final result, as sketched below.
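■ For example, one common way to implement this step is to compute the update and then re-symmetrize the result explicitly. A minimal MATLAB sketch (the variable names SigmaX, A, and SigmaW are illustrative, not from the notes):

    % Covariance time update with explicit re-symmetrization.
    % SigmaX holds the posterior covariance on entry, the prior on exit.
    SigmaX = A*SigmaX*A' + SigmaW;
    SigmaX = (SigmaX + SigmaX')/2;   % remove asymmetry due to round-off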

■ Consider next the measurement-update equation:
$$\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_kC_k\Sigma_{\tilde{x},k}^-.$$

■ Theoretically, the result is positive definite, but due to the subtraction operation it is possible for round-off errors in an implementation to result in a non-positive-definite solution.

■ The problem may be mitigated in part by computing instead
$$\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_k\Sigma_{\tilde{z},k}L_k^T.$$

• This may be proven correct via
$$L_k\Sigma_{\tilde{z},k}L_k^T = L_k\Sigma_{\tilde{z},k}\left[\Sigma_{\tilde{x},k}^-C_k^T\Sigma_{\tilde{z},k}^{-1}\right]^T = L_kC_k\Sigma_{\tilde{x},k}^-,$$
using the definition $L_k = \Sigma_{\tilde{x},k}^-C_k^T\Sigma_{\tilde{z},k}^{-1}$ and the symmetry of $\Sigma_{\tilde{z},k}$ and $\Sigma_{\tilde{x},k}^-$.

■ With a “suitable implementation” of the products in the $L_k\Sigma_{\tilde{z},k}L_k^T$ term, symmetry can be guaranteed. However, the subtraction may still give a non-positive-definite result if there is round-off error.

■ A better solution is the Joseph-form covariance update,
$$\Sigma_{\tilde{x},k}^+ = \left(I - L_kC_k\right)\Sigma_{\tilde{x},k}^-\left(I - L_kC_k\right)^T + L_k\Sigma_{\tilde{v}}L_k^T.$$

• This may be proven correct via expanding the product,
$$\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_kC_k\Sigma_{\tilde{x},k}^- - \Sigma_{\tilde{x},k}^-C_k^TL_k^T + L_k\left(C_k\Sigma_{\tilde{x},k}^-C_k^T + \Sigma_{\tilde{v}}\right)L_k^T,$$
and noting that the final term equals $L_k\Sigma_{\tilde{z},k}L_k^T = \Sigma_{\tilde{x},k}^-C_k^TL_k^T$, which cancels the third term.

■ Because the subtraction occurs in the “squared” term, this form “guarantees” a positive definite result.
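■ A minimal MATLAB sketch of the Joseph-form update (the names SigmaX, L, C, and SigmaV are illustrative; SigmaX holds $\Sigma_{\tilde{x},k}^-$ on entry and $\Sigma_{\tilde{x},k}^+$ on exit):

    % Joseph-form measurement update: the subtraction happens inside a
    % quadratic ("squared") term, so positive definiteness is preserved
    % much more robustly in the presence of round-off errors.
    n      = size(SigmaX,1);
    ImLC   = eye(n) - L*C;
    SigmaX = ImLC*SigmaX*ImLC' + L*SigmaV*L';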

■ If we end up with a non-positive-definite matrix anyway (due to numerics), we can replace it by the nearest symmetric positive-semidefinite matrix.

■ Omitting the details, the procedure is:

• Calculate the singular-value decomposition: $\Sigma = USV^T$.

• Compute $H = VSV^T$.

• Replace $\Sigma$ with $\left(\Sigma + \Sigma^T + H + H^T\right)/4$.
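■ A minimal MATLAB sketch of this replacement (the name Sigma is illustrative, holding the suspect covariance matrix):

    % Replace Sigma by the nearest symmetric positive-semidefinite matrix.
    [U,S,V] = svd(Sigma);                 % singular-value decomposition
    H = V*S*V';                           % symmetric polar factor
    Sigma = (Sigma + Sigma' + H + H')/4;  % nearest-SPSD replacement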

5.2: Sequential processing of measurements

■ There are still improvements that may be made. We can:

• Reduce the computational requirements of the Joseph form,

• Increase the precision of the numeric results.

■ One of the computationally intensive operations in the Kalman filter is the matrix inverse operation in
$$L_k = \Sigma_{\tilde{x},k}^-C_k^T\left(C_k\Sigma_{\tilde{x},k}^-C_k^T + \Sigma_{\tilde{v}}\right)^{-1}.$$

■ Matrix inversion via Gaussian elimination (the most straightforward approach) is an $O(m^3)$ operation, where m is the dimension of the measurement vector.

■ If there is a single sensor, this matrix inverse becomes a scalar division, which is an O(1) operation.

■ Therefore, if we can break the m measurements into m single-sensor measurements and update the Kalman filter that way, there is opportunity for significant computational savings.

Sequentially processing independent measurements

■ We start by assuming that the sensor measurements are independent. That is, that
$$\Sigma_{\tilde{v}} = \operatorname{diag}\left(\sigma_{\tilde{v}_1}^2, \sigma_{\tilde{v}_2}^2, \ldots, \sigma_{\tilde{v}_m}^2\right).$$

■ We will use colon “:” notation to refer to the measurement number. For example, $z_{k:1}$ is the measurement from sensor 1 at time k.

■ Then, the measurement is
$$z_k = \begin{bmatrix} C_{k:1}^T \\ \vdots \\ C_{k:m}^T \end{bmatrix}x_k + \begin{bmatrix} v_{k:1} \\ \vdots \\ v_{k:m} \end{bmatrix},$$
where $C_{k:1}^T$ is the first row of $C_k$ (for example), and $v_{k:1}$ is the sensor noise of the first sensor at time k, for example.

■ We will consider this a sequence of scalar measurements $z_{k:1}, \ldots, z_{k:m}$, and update the state estimate and covariance estimates in m steps.

■ We initialize the measurement update process with $\hat{x}_{k:0}^+ = \hat{x}_k^-$ and $\Sigma_{\tilde{x},k:0}^+ = \Sigma_{\tilde{x},k}^-$.

■ Consider the measurement update for the ith measurement, $z_{k:i}$:
$$\begin{aligned}
\hat{x}_{k:i}^+ &= \mathbb{E}\left[x_k \mid \mathbb{Z}_{k-1}, z_{k:1}, \ldots, z_{k:i}\right] \\
&= \mathbb{E}\left[x_k \mid \mathbb{Z}_{k-1}, z_{k:1}, \ldots, z_{k:i-1}\right] + L_{k:i}\left(z_{k:i} - \mathbb{E}\left[z_{k:i} \mid \mathbb{Z}_{k-1}, z_{k:1}, \ldots, z_{k:i-1}\right]\right).
\end{aligned}$$

■ Generalizing from before, $\mathbb{E}\left[z_{k:i} \mid \mathbb{Z}_{k-1}, z_{k:1}, \ldots, z_{k:i-1}\right] = C_{k:i}^T\hat{x}_{k:i-1}^+$.

■ Next, we recognize that the variance of the innovation corresponding to measurement $z_{k:i}$ is
$$\sigma_{\tilde{z},k:i}^2 = C_{k:i}^T\Sigma_{\tilde{x},k:i-1}^+C_{k:i} + \sigma_{\tilde{v}_i}^2.$$

■ The corresponding gain is $L_{k:i} = \Sigma_{\tilde{x},k:i-1}^+C_{k:i}\big/\sigma_{\tilde{z},k:i}^2$, and the updated state is
$$\hat{x}_{k:i}^+ = \hat{x}_{k:i-1}^+ + L_{k:i}\left(z_{k:i} - C_{k:i}^T\hat{x}_{k:i-1}^+\right),$$
with covariance
$$\Sigma_{\tilde{x},k:i}^+ = \left(I - L_{k:i}C_{k:i}^T\right)\Sigma_{\tilde{x},k:i-1}^+.$$

■ The covariance update can be implemented as
$$\Sigma_{\tilde{x},k:i}^+ = \Sigma_{\tilde{x},k:i-1}^+ - L_{k:i}\sigma_{\tilde{z},k:i}^2L_{k:i}^T.$$

■ An alternative update is the Joseph form,
$$\Sigma_{\tilde{x},k:i}^+ = \left(I - L_{k:i}C_{k:i}^T\right)\Sigma_{\tilde{x},k:i-1}^+\left(I - L_{k:i}C_{k:i}^T\right)^T + L_{k:i}\sigma_{\tilde{v}_i}^2L_{k:i}^T.$$

■ The final measurement update gives $\hat{x}_k^+ = \hat{x}_{k:m}^+$ and $\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k:m}^+$.
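■ Putting these steps together, a minimal MATLAB sketch of the sequential measurement update (the names xhat, SigmaX, C, z, and sigmaV2 are illustrative; xhat and SigmaX hold the predicted quantities on entry and the updated quantities on exit):

    % Sequential scalar updates for m independent sensors. C is the
    % m-by-n output matrix, z the m-by-1 measurement vector, and sigmaV2
    % the m-by-1 vector of sensor-noise variances (diagonal of SigmaV).
    m = length(z);
    for i = 1:m
      Ci   = C(i,:);                       % ith row of C
      sz2  = Ci*SigmaX*Ci' + sigmaV2(i);   % innovation variance
      Li   = SigmaX*Ci'/sz2;               % scalar-measurement gain
      xhat = xhat + Li*(z(i) - Ci*xhat);   % state update
      SigmaX = SigmaX - Li*sz2*Li';        % covariance update
    end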

Sequentially processing correlated measurements

■ The above process must be modified to accommodate the situation where sensor noise is correlated among the measurements.

■ Assume that we can factor the matrix $\Sigma_{\tilde{v}} = S_vS_v^T$, where $S_v$ is a lower-triangular matrix (for symmetric positive-definite $\Sigma_{\tilde{v}}$, we always can).

• The factor $S_v$ is a kind of matrix square root, and will be important in a number of places in this course.

• It is known as the “Cholesky” factor of the original matrix.

• In MATLAB,                Sv = chol(SigmaV,'lower');

• Be careful: MATLAB’s default answer (without specifying “lower”) is an upper-triangular matrix, which is not what we’re after.

■ The Cholesky factor has strictly positive elements on its diagonal (which, for a triangular matrix, are its eigenvalues), so it is guaranteed to be invertible.

■ Consider a modification to the output equation of a system having correlated measurements:
$$\bar{z}_k = S_v^{-1}z_k = S_v^{-1}C_kx_k + S_v^{-1}v_k = \bar{C}_kx_k + \bar{v}_k.$$

• Note that we will use the “bar” decoration $(\bar{\cdot})$ frequently in this chapter of notes.

• It rarely (if ever) indicates the mean of that quantity.

• Rather, it refers to a definition having similar meaning to the original symbol.

• For example, $\bar{z}_k$ is a (computed) output value, similar in interpretation to the measured output value $z_k$.

■ Consider now the covariance of the modified noise input $\bar{v}_k = S_v^{-1}v_k$:
$$\Sigma_{\bar{v}} = \mathbb{E}\left[\bar{v}_k\bar{v}_k^T\right] = S_v^{-1}\Sigma_{\tilde{v}}S_v^{-T} = S_v^{-1}S_vS_v^TS_v^{-T} = I.$$

■ Therefore, we have identified a transformation that de-correlates (and normalizes) measurement noise.

■ Using this revised output equation, we can apply the prior sequential method.

■ We start the measurement update process with $\hat{x}_{k:0}^+ = \hat{x}_k^-$ and $\Sigma_{\tilde{x},k:0}^+ = \Sigma_{\tilde{x},k}^-$.

■ The Kalman gain is $L_{k:i} = \Sigma_{\tilde{x},k:i-1}^+\bar{C}_{k:i}\big/\left(\bar{C}_{k:i}^T\Sigma_{\tilde{x},k:i-1}^+\bar{C}_{k:i} + 1\right)$ (the noise variance is now unity), and the updated state is
$$\hat{x}_{k:i}^+ = \hat{x}_{k:i-1}^+ + L_{k:i}\left(\bar{z}_{k:i} - \bar{C}_{k:i}^T\hat{x}_{k:i-1}^+\right),$$
with covariance
$$\Sigma_{\tilde{x},k:i}^+ = \left(I - L_{k:i}\bar{C}_{k:i}^T\right)\Sigma_{\tilde{x},k:i-1}^+$$
(which may also be computed with a Joseph-form update, for example).

■ The final measurement update gives $\hat{x}_k^+ = \hat{x}_{k:m}^+$ and $\Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k:m}^+$.
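■ A minimal MATLAB sketch of this de-correlating transformation, feeding the sequential method above (the names SigmaV, z, C, xhat, and SigmaX are illustrative):

    % De-correlate the measurement noise, then update sequentially.
    Sv   = chol(SigmaV,'lower');   % SigmaV = Sv*Sv'
    zbar = Sv\z;                   % modified measurement, Sv^{-1}*z
    Cbar = Sv\C;                   % modified output matrix, Sv^{-1}*C
    for i = 1:length(zbar)         % noise covariance is now identity
      Ci   = Cbar(i,:);
      sz2  = Ci*SigmaX*Ci' + 1;    % unit noise variance after normalizing
      Li   = SigmaX*Ci'/sz2;
      xhat = xhat + Li*(zbar(i) - Ci*xhat);
      SigmaX = SigmaX - Li*sz2*Li';
    end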

LDL updates for correlated measurements

■ An alternative to the Cholesky decomposition for factoring the covariance matrix is the LDL decomposition,
$$\Sigma_{\tilde{v}} = L_vD_vL_v^T,$$
where $L_v$ is lower-triangular and $D_v$ is diagonal (with positive entries).

■ In MATLAB, [L,D] = ldl(SigmaV);

■ The Cholesky decomposition is related to the LDL decomposition via $S_v = L_vD_v^{1/2}$.

■ Both texts show how to use the LDL decomposition to perform a sequential measurement update.

■ A computational advantage of LDL over Cholesky is that no square-root operations need be taken. (We can avoid computing $D_v^{1/2}$.)
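■ A quick numerical check of this relationship (a sketch; SigmaV is assumed symmetric positive definite; note that MATLAB’s ldl may permute rows for stability, in which case $L_v D_v^{1/2}$ is still a valid square root of $\Sigma_{\tilde{v}}$, though not necessarily triangular):

    Sv      = chol(SigmaV,'lower');   % Cholesky factor
    [Lv,Dv] = ldl(SigmaV);            % LDL' factorization
    SvLDL   = Lv*sqrt(Dv);            % Cholesky-like factor from LDL
    norm(SigmaV - SvLDL*SvLDL')       % should be near machine precision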

■ A pedagogical advantage of introducing the Cholesky decomposition is that we use it later on in this course.
