Blericker74

2022-07-05

If I have $m$ measurements to estimate an $n$-dimensional state vector, and I am using a Kalman filter, should I stack all $m$ measurements into the measurement matrix (measurement transformation matrix) and perform a single update, or should I filter each measurement sequentially? Please provide some supporting explanation for your choice.

For example: let $m=2$ and $n=3$. The state vector is 3-dimensional, and we need to use the two measurements to obtain the posterior estimate of the state vector. I can use one of these two methods:

1. Use both measurements together, forming a gain matrix of size $3\times 2$.

2. Use one measurement at a time and perform the update twice. The gain matrix is $3\times 1$ in this case.

Which of the two methods is a better choice?


Karla Hull

Beginner · 2022-07-06 · Added 20 answers

Short Answer:

This is an interesting question that I have wondered about myself but had not worked out. My intuition says that both approaches are equivalent as long as the measurements are uncorrelated: if the measurements are correlated, you must include the off-diagonal terms of the measurement noise covariance matrix, which is not possible when the updates are performed one after another.

However, we can settle the question once and for all by working out the equations, which is not too difficult. Here we consider scalar measurements for simplicity, but the argument applies to vector measurements as well.

Long Answer:

I suggest looking at the information form of the Kalman filter (KF) and extended Kalman filter (EKF) for more insight here. The information matrix (i.e., the inverse of the covariance matrix) of the information form can be written in a single equation as follows, with time indices omitted for clarity:

${P}^{-1}={\left(FP{F}^{T}+Q\right)}^{-1}+{H}^{T}{R}^{-1}H$

where $P$ is the covariance matrix, $F$ and $H$ are the Jacobians for the process and observation models, and $Q$ and $R$ are the covariance matrices for the process and observation noises.
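As a numerical sketch of this recursion, here is a hypothetical linear 3-state, 2-measurement system (all names and numbers below are illustrative, not from the question), using the standard predicted covariance $FPF^T+Q$:

```python
import numpy as np

# Sketch of the information-form covariance update for a hypothetical
# 3-state, 2-measurement linear system (all numbers are made up).
n, m = 3, 2
rng = np.random.default_rng(0)

F = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # process Jacobian
H = rng.standard_normal((m, n))                    # observation Jacobian
Q = 0.01 * np.eye(n)                               # process noise covariance
R = np.diag([0.5, 0.2])                            # measurement noise covariance
P = np.eye(n)                                      # prior covariance

# P_post^{-1} = (F P F^T + Q)^{-1} + H^T R^{-1} H
P_pred = F @ P @ F.T + Q
info_post = np.linalg.inv(P_pred) + H.T @ np.linalg.inv(R) @ H
P_post = np.linalg.inv(info_post)
```

Note that the measurement term $H^T R^{-1} H$ simply adds information on top of the predicted information $(FPF^T+Q)^{-1}$, which is what makes the sequential-vs-batch comparison below tractable.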

Case 1 (Sequential Updates)

Let $A=\frac{\partial {h}_{A}}{\partial x}$ denote the (row-vector) Jacobian of the first measurement function ${h}_{A}$, and let $B=\frac{\partial {h}_{B}}{\partial x}$ denote the Jacobian of the second measurement function ${h}_{B}$. Performing the updates sequentially then gives:

${P}^{-1}={\left(FP{F}^{T}+Q\right)}^{-1}+{A}^{T}{R}_{A}^{-1}A+{B}^{T}{R}_{B}^{-1}B$

Now assume each measurement is a scalar; then

${A}^{T}{R}_{A}^{-1}A=\left[\begin{array}{ccc}{a}_{00}& \cdots & {a}_{0n}\\ \vdots & \ddots & \vdots \\ {a}_{n0}& \cdots & {a}_{nn}\end{array}\right],\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}{B}^{T}{R}_{B}^{-1}B=\left[\begin{array}{ccc}{b}_{00}& \cdots & {b}_{0n}\\ \vdots & \ddots & \vdots \\ {b}_{n0}& \cdots & {b}_{nn}\end{array}\right]$

where

${a}_{ij}={R}_{A}^{-1}\frac{\partial {h}_{A}}{\partial {x}_{i}}\frac{\partial {h}_{A}}{\partial {x}_{j}},\qquad {b}_{ij}={R}_{B}^{-1}\frac{\partial {h}_{B}}{\partial {x}_{i}}\frac{\partial {h}_{B}}{\partial {x}_{j}}$

Therefore,

$\begin{array}{r}{A}^{T}{R}_{A}^{-1}A+{B}^{T}{R}_{B}^{-1}B=\left[\begin{array}{ccc}{a}_{00}+{b}_{00}& \cdots & {a}_{0n}+{b}_{0n}\\ \vdots & \ddots & \vdots \\ {a}_{n0}+{b}_{n0}& \cdots & {a}_{nn}+{b}_{nn}\end{array}\right]\end{array}$
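A quick numerical check of this elementwise picture (the Jacobian rows and noise variances below are made up for illustration): each scalar update contributes a rank-one outer-product term to the information matrix, and the two contributions simply add.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((1, n))  # row Jacobian of scalar measurement h_A
B = rng.standard_normal((1, n))  # row Jacobian of scalar measurement h_B
R_A, R_B = 0.5, 0.2              # scalar measurement noise variances

# Each scalar update contributes an outer-product (rank-one) term:
info_A = A.T @ A / R_A           # entries a_ij = R_A^{-1} (dh_A/dx_i)(dh_A/dx_j)
info_B = B.T @ B / R_B           # entries b_ij = R_B^{-1} (dh_B/dx_i)(dh_B/dx_j)
seq_total = info_A + info_B      # the elementwise sum shown above
```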

Case 2 (Simultaneous Updates)

Let $h={\left[{h}_{A},{h}_{B}\right]}^{T}$ represent the measurement function. In this case the measurement function is vector-valued, so its Jacobian is given as follows:

$H=\left[\begin{array}{ccc}\frac{\mathrm{\partial}{h}_{A}}{\mathrm{\partial}{x}_{0}}& \frac{\mathrm{\partial}{h}_{A}}{\mathrm{\partial}{x}_{1}}& \cdots \\ \frac{\mathrm{\partial}{h}_{B}}{\mathrm{\partial}{x}_{0}}& \frac{\mathrm{\partial}{h}_{B}}{\mathrm{\partial}{x}_{1}}& \cdots \end{array}\right]$

Now,

${H}^{T}{R}^{-1}H=\left[\begin{array}{ccc}{H}_{00}& \cdots & {H}_{0n}\\ \vdots & \ddots & \vdots \\ {H}_{n0}& \cdots & {H}_{nn}\end{array}\right]$

where

${H}_{ij}=\frac{\partial {h}_{A}}{\partial {x}_{j}}\left({R}_{A}^{-1}\frac{\partial {h}_{A}}{\partial {x}_{i}}+{R}_{BA}^{-1}\frac{\partial {h}_{B}}{\partial {x}_{i}}\right)+\frac{\partial {h}_{B}}{\partial {x}_{j}}\left({R}_{AB}^{-1}\frac{\partial {h}_{A}}{\partial {x}_{i}}+{R}_{B}^{-1}\frac{\partial {h}_{B}}{\partial {x}_{i}}\right)$

where (with slight abuse of notation)

${R}^{-1}=\left[\begin{array}{cc}{R}_{A}^{-1}& {R}_{AB}^{-1}\\ {R}_{BA}^{-1}& {R}_{B}^{-1}\end{array}\right]$

So, if the measurements are uncorrelated (i.e., ${R}_{AB}^{-1}={R}_{BA}^{-1}=0$), then ${H}^{T}{R}^{-1}H={A}^{T}{R}_{A}^{-1}A+{B}^{T}{R}_{B}^{-1}B$, which means that applying the updates sequentially or simultaneously gives the same result. In general, however, you cannot apply the updates sequentially to correlated measurements.
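This conclusion can also be verified numerically with the standard covariance-form KF update. The system below is a made-up 3-state, 2-measurement example (not from the original question); it runs both measurements as one batch update (with a $3\times 2$ gain) and as two scalar updates (with $3\times 1$ gains) and compares the posteriors.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard KF measurement update; z, H, R may be scalar-sized or block."""
    H = np.atleast_2d(H)
    z = np.atleast_1d(z)
    R = np.atleast_2d(R)
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
n = 3
x0 = rng.standard_normal(n)           # prior mean
P0 = np.eye(n)                        # prior covariance
H = rng.standard_normal((2, n))       # stacked observation Jacobian
R = np.diag([0.5, 0.2])               # uncorrelated measurement noise
z = rng.standard_normal(2)            # the two measurements

# Batch: both measurements at once (3x2 gain).
xb, Pb = kf_update(x0, P0, z, H, R)

# Sequential: one scalar measurement at a time (3x1 gains).
xs, Ps = kf_update(x0, P0, z[:1], H[:1], R[:1, :1])
xs, Ps = kf_update(xs, Ps, z[1:], H[1:], R[1:, 1:])

print(np.allclose(xb, xs), np.allclose(Pb, Ps))  # both True when R is diagonal
```

Rerunning this with an off-diagonal term in `R` makes the two posteriors differ, matching the derivation.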

