# The Kalman Filter

These notes are based on

Meinhold, R. J., and Singpurwalla, N. D. (1983). Understanding the Kalman Filter. *The American Statistician*, 37, 123–127.

## Model

The Kalman Filter provides an efficient recursive estimator for the unobserved state of a linear discrete time dynamical system in the presence of measurement error. Kalman (1960) first introduced the method in the Engineering literature, but it can be understood in the context of Bayesian inference.

Let ${y}_{t}$ denote a vector of observed variables at time $t$ and let ${s}_{t}$ denote the unobserved state variables of the system at time $t$. We wish to conduct inference about the state variables given only the observed data $\left\{{y}_{t}\right\}$ and the structure of a linear model consisting of a measurement equation and a transition equation.

The evolution of the observed variable depends on the state variables through a linear measurement equation

(1)${y}_{t}=F{s}_{t}+{\epsilon }_{t},\phantom{\rule{1em}{0ex}}{\epsilon }_{t}\sim N\left(0,{\Omega }_{\epsilon }\right).$

The variable ${y}_{t}$ is observed with measurement error which follows the Normal distribution with mean zero and covariance matrix ${\Omega }_{\epsilon }$.

The state vector ${s}_{t}$ obeys the transition equation

(2)${s}_{t}=G{s}_{t-1}+{\eta }_{t},\phantom{\rule{1em}{0ex}}{\eta }_{t}\sim N\left(0,{\Omega }_{\eta }\right)$

where $G$ and ${\Omega }_{\eta }$ are known matrices and ${\eta }_{t}$ captures the influence of effects that are outside the model on the state transition process. The noise terms ${\epsilon }_{t}$ and ${\eta }_{t}$ are independent. In general $G$ and $F$ can be time-dependent but for the sake of simplicity the time subscripts are omitted here.

The Kalman Filter is similar in nature to the standard linear regression model: the state of the process ${s}_{t}$ plays the role of the regression coefficients. However, the state is not constant over time, which requires the introduction of the transition equation.
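As a concrete illustration, data can be simulated directly from the state-space model in (1) and (2). The particular matrices below (a two-dimensional state observed through a scalar measurement) are arbitrary choices for this sketch, not values from the original notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model matrices (assumed values, not from the notes).
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition matrix
F = np.array([[1.0, 0.0]])        # measurement matrix
Omega_eta = 0.01 * np.eye(2)      # transition noise covariance
Omega_eps = np.array([[0.25]])    # measurement noise covariance

T = 100
s = np.zeros((T, 2))              # unobserved states
y = np.zeros((T, 1))              # observed data
s_prev = np.zeros(2)
for t in range(T):
    # Transition equation (2): s_t = G s_{t-1} + eta_t
    s[t] = G @ s_prev + rng.multivariate_normal(np.zeros(2), Omega_eta)
    # Measurement equation (1): y_t = F s_t + eps_t
    y[t] = F @ s[t] + rng.multivariate_normal(np.zeros(1), Omega_eps)
    s_prev = s[t]
```

The filter described below sees only `y`; the simulated `s` is what it tries to recover.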

## Bayesian Interpretation

Let ${Y}_{t}=\left({y}_{t},{y}_{t-1},\dots ,{y}_{1}\right)$ denote the complete history of observed data at time $t$. Our goal is to obtain the posterior distribution of ${s}_{t}$ given ${Y}_{t}$. We know from Bayes’ Theorem that

(3)$\begin{array}{rl}\mathrm{Pr}\left({s}_{t}|{Y}_{t}\right)& =\frac{\mathrm{Pr}\left({y}_{t}|{s}_{t},{Y}_{t-1}\right)\mathrm{Pr}\left({s}_{t}|{Y}_{t-1}\right)}{\mathrm{Pr}\left({y}_{t}|{Y}_{t-1}\right)}\\ & \propto \mathrm{Pr}\left({y}_{t}|{s}_{t},{Y}_{t-1}\right)\mathrm{Pr}\left({s}_{t}|{Y}_{t-1}\right).\end{array}$

The left-hand side is the posterior distribution of ${s}_{t}$. On the second line, the first term is the likelihood of ${s}_{t}$ and the second term is the prior distribution of ${s}_{t}$. This equation defines a recursive Bayesian updating relationship.

At time $t-1$, our knowledge of the system is summarized by the posterior distribution

(4)${s}_{t-1}|{Y}_{t-1}\sim N\left({\stackrel{^}{s}}_{t-1},{\Sigma }_{t-1}\right)$

where ${\stackrel{^}{s}}_{t-1}$ is the posterior mean of ${s}_{t-1}$ and ${\Sigma }_{t-1}$ is its covariance matrix. The recursion is initialized at time $0$ by specifying ${\stackrel{^}{s}}_{0}$ and ${\Sigma }_{0}$.

Before observing ${y}_{t}$, our best prediction of ${s}_{t}$ comes from (2), namely $G{s}_{t-1}+{\eta }_{t}$. Combining this with (4), we have

(5)${s}_{t}|{Y}_{t-1}\sim N\left(G{\stackrel{^}{s}}_{t-1},{R}_{t}\right),$

where

(6)${R}_{t}\equiv G{\Sigma }_{t-1}{G}^{\top }+{\Omega }_{\eta }.$

This follows directly from the properties of the multivariate Normal distribution.
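In code, the prediction step in (5) and (6) is a single update of the posterior mean and covariance from time $t-1$. The function name and the numerical values below are illustrative assumptions, not part of the notes.

```python
import numpy as np

def predict(s_hat_prev, Sigma_prev, G, Omega_eta):
    """Prior for s_t given Y_{t-1}: mean G s_hat_{t-1}, covariance R_t (eqs. 5-6)."""
    prior_mean = G @ s_hat_prev
    R_t = G @ Sigma_prev @ G.T + Omega_eta
    return prior_mean, R_t

# Example with arbitrary values.
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Omega_eta = 0.01 * np.eye(2)
mean, R = predict(np.array([0.5, 0.1]), np.eye(2), G, Omega_eta)
```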

After observing ${y}_{t}$, we can update our knowledge about ${s}_{t}$ using the likelihood $\mathrm{Pr}\left({y}_{t}|{s}_{t},{Y}_{t-1}\right)$. Let ${e}_{t}$ denote the error in predicting ${y}_{t}$,

(7)${e}_{t}\equiv {y}_{t}-{\stackrel{^}{y}}_{t}={y}_{t}-FG{\stackrel{^}{s}}_{t-1}.$

Observing ${e}_{t}$ is equivalent to observing ${y}_{t}$, since $F$, $G$, and ${\stackrel{^}{s}}_{t-1}$ are all known. Thus, (3) becomes

(8)$\mathrm{Pr}\left({s}_{t}|{y}_{t},{Y}_{t-1}\right)=\mathrm{Pr}\left({s}_{t}|{e}_{t},{Y}_{t-1}\right)\propto \mathrm{Pr}\left({e}_{t}|{s}_{t},{Y}_{t-1}\right)\mathrm{Pr}\left({s}_{t}|{Y}_{t-1}\right).$

Now, using the measurement equation (1), we can write ${e}_{t}=F\left({s}_{t}-G{\stackrel{^}{s}}_{t-1}\right)+{\epsilon }_{t}$, and therefore

(9)${e}_{t}|{s}_{t},{Y}_{t-1}\sim N\left(F\left({s}_{t}-G{\stackrel{^}{s}}_{t-1}\right),{\Omega }_{\epsilon }\right).$

Now, from Bayes’ Theorem the posterior distribution of ${s}_{t}$ satisfies

(10)$\mathrm{Pr}\left({s}_{t}|{y}_{t},{Y}_{t-1}\right)=\frac{\mathrm{Pr}\left({e}_{t}|{s}_{t},{Y}_{t-1}\right)\mathrm{Pr}\left({s}_{t}|{Y}_{t-1}\right)}{\int \mathrm{Pr}\left({e}_{t},{s}_{t}|{Y}_{t-1}\right)\phantom{\rule{thinmathspace}{0ex}}d{s}_{t}}.$

Once this probability is computed, we can perform another iteration of the recursion by going back to (3).

## Calculating the Posterior Distribution

We can calculate the posterior distribution (10) directly by appealing to the properties of the Normal Distribution. Note that

(11)$\left(\begin{array}{c}{s}_{t}\\ {e}_{t}\end{array}\right)|{Y}_{t-1}\sim N\left[\left(\begin{array}{c}G{\stackrel{^}{s}}_{t-1}\\ 0\end{array}\right),\phantom{\rule{thinmathspace}{0ex}}\left(\begin{array}{cc}{R}_{t}& {R}_{t}{F}^{\top }\\ F{R}_{t}& {\Omega }_{\epsilon }+F{R}_{t}{F}^{\top }\end{array}\right)\right],$

where ${R}_{t}$ is given by (6). Conditional on ${e}_{t}$, the distribution of ${s}_{t}$ is

(12)${s}_{t}|{e}_{t},{Y}_{t-1}\sim N\left[G{\stackrel{^}{s}}_{t-1}+{R}_{t}{F}^{\top }\left({\Omega }_{\epsilon }+F{R}_{t}{F}^{\top }{\right)}^{-1}{e}_{t},\phantom{\rule{thinmathspace}{0ex}}{R}_{t}-{R}_{t}{F}^{\top }\left({\Omega }_{\epsilon }+F{R}_{t}{F}^{\top }{\right)}^{-1}F{R}_{t}\right].$

To summarize, the posterior distribution of ${s}_{t}$ can be calculated recursively. First choose initial values ${\stackrel{^}{s}}_{0}$ and ${\Sigma }_{0}$. Then at each period $t$, given the posterior distribution of ${s}_{t-1}$ with mean ${\stackrel{^}{s}}_{t-1}$ and covariance matrix ${\Sigma }_{t-1}$ as in (4), form a prior for ${s}_{t}$ with mean $G{\stackrel{^}{s}}_{t-1}$ and covariance matrix ${R}_{t}=G{\Sigma }_{t-1}{G}^{\top }+{\Omega }_{\eta }$ as in (5), evaluate the likelihood in (9) given ${e}_{t}={y}_{t}-FG{\stackrel{^}{s}}_{t-1}$, and arrive at the posterior at time $t$ given by (12).
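A scalar example makes the recursion concrete. For a local-level model with $F=G=1$ (an illustrative special case, not from the original notes), one full predict-and-update step reduces to shrinking the prediction toward the new observation:

```python
def kalman_step_scalar(s_hat_prev, Sigma_prev, y_t,
                       omega_eps=1.0, omega_eta=0.5):
    """One recursion of the filter for the local-level model F = G = 1."""
    R_t = Sigma_prev + omega_eta        # prior variance, eq. (6) with G = 1
    e_t = y_t - s_hat_prev              # prediction error, eq. (7)
    K = R_t / (omega_eps + R_t)         # weight on the prediction error
    s_hat = s_hat_prev + K * e_t        # posterior mean, eq. (12)
    Sigma = R_t - K * R_t               # posterior variance, eq. (12)
    return s_hat, Sigma

s_hat, Sigma = kalman_step_scalar(s_hat_prev=0.0, Sigma_prev=0.5, y_t=2.0)
```

Here the prior variance and the measurement variance are both 1, so the posterior mean moves exactly halfway from the prediction toward the observation.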

## Algorithm

Using the theoretical derivation as a guide, we can implement the Kalman Filter as a recursive algorithm. Given initial values ${\stackrel{^}{s}}_{0}$ and ${\Sigma }_{0}$, at each time $t$,

1. The posterior distribution at time $t-1$ is Normal with mean ${\stackrel{^}{s}}_{t-1}$ and covariance matrix ${\Sigma }_{t-1}$.

2. Form the covariance matrix of the prior distribution, ${R}_{t}=G{\Sigma }_{t-1}{G}^{\top }+{\Omega }_{\eta };$

3. Calculate the mean of the posterior, ${\stackrel{^}{s}}_{t}=G{\stackrel{^}{s}}_{t-1}+{R}_{t}{F}^{\top }\left({\Omega }_{\epsilon }+F{R}_{t}{F}^{\top }{\right)}^{-1}{e}_{t}$, where ${e}_{t}={y}_{t}-FG{\stackrel{^}{s}}_{t-1}$ is the prediction error;

4. Calculate the covariance matrix of the posterior: ${\Sigma }_{t}={R}_{t}-{R}_{t}{F}^{\top }\left({\Omega }_{\epsilon }+F{R}_{t}{F}^{\top }{\right)}^{-1}F{R}_{t}.$
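The steps above can be sketched as a short function. The function name, the example matrices, and the random-walk test data are assumptions made for this sketch; the loop body follows the four steps directly.

```python
import numpy as np

def kalman_filter(y, s_hat0, Sigma0, F, G, Omega_eps, Omega_eta):
    """Run steps 1-4 over the observations y[0], ..., y[T-1].

    Returns the sequences of posterior means and covariance matrices.
    """
    s_hat, Sigma = s_hat0, Sigma0
    means, covs = [], []
    for y_t in y:
        # Step 2: covariance of the prior, R_t = G Sigma_{t-1} G' + Omega_eta.
        R = G @ Sigma @ G.T + Omega_eta
        # Prediction error e_t = y_t - F G s_hat_{t-1}.
        e = y_t - F @ G @ s_hat
        # Weight on the prediction error: R_t F' (Omega_eps + F R_t F')^{-1}.
        K = R @ F.T @ np.linalg.inv(Omega_eps + F @ R @ F.T)
        # Step 3: mean of the posterior.
        s_hat = G @ s_hat + K @ e
        # Step 4: covariance matrix of the posterior.
        Sigma = R - K @ F @ R
        means.append(s_hat)
        covs.append(Sigma)
    return np.array(means), np.array(covs)

# Example: filter a noisy scalar random walk (F = G = 1).
rng = np.random.default_rng(1)
F = G = np.eye(1)
y = np.cumsum(rng.normal(size=50)).reshape(-1, 1) \
    + rng.normal(scale=0.5, size=(50, 1))
means, covs = kalman_filter(y, np.zeros(1), np.eye(1), F, G,
                            0.25 * np.eye(1), 1.0 * np.eye(1))
```

For larger state dimensions, replacing the explicit inverse with a linear solve (e.g. `np.linalg.solve`) is the usual numerically preferable choice.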