The Kalman Filter

These notes are based on

Meinhold, R. J. and Singpurwalla, N. D. (1983): Understanding the Kalman Filter. The American Statistician, 37, 123–127.

Model

The Kalman Filter provides an efficient recursive estimator for the unobserved state of a linear discrete-time dynamical system in the presence of measurement error. Kalman (1960) first introduced the method in the engineering literature, but it can also be understood in the context of Bayesian inference.

Let $y_t$ denote a vector of observed variables at time $t$ and let $s_t$ denote the unobserved state variables of the system at time $t$. We wish to conduct inference about the state variables given only the observed data $\{y_t\}$ and the structure of a linear model consisting of a measurement equation and a transition equation.

The evolution of the observed variable depends on the state variables through a linear measurement equation
$$y_t = F s_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Omega_\varepsilon). \tag{eq:measurement}$$
The variable $y_t$ is observed with measurement error $\varepsilon_t$, which follows the Normal distribution with mean zero and covariance matrix $\Omega_\varepsilon$.

The state vector $s_t$ obeys the transition equation
$$s_t = G s_{t-1} + \eta_t, \qquad \eta_t \sim N(0, \Omega_\eta), \tag{eq:transition}$$
where $G$ and $\Omega_\eta$ are known matrices and $\eta_t$ captures the influence of effects outside the model on the state transition process. The noise terms $\varepsilon_t$ and $\eta_t$ are independent. In general $G$ and $F$ can be time-dependent, but for the sake of simplicity the time subscripts are omitted here.
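To make the model concrete, here is a minimal simulation sketch in Python/NumPy. The particular matrices $F$, $G$, $\Omega_\varepsilon$, and $\Omega_\eta$ are illustrative assumptions (a two-dimensional level-and-slope state observed through its first component), not values taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) system: 2-dimensional state, scalar observation.
F = np.array([[1.0, 0.0]])                # measurement matrix (1 x 2)
G = np.array([[1.0, 1.0],                 # transition matrix  (2 x 2)
              [0.0, 1.0]])
Omega_eps = np.array([[0.5]])             # measurement noise covariance
Omega_eta = 0.01 * np.eye(2)              # transition noise covariance

T = 100
s = np.zeros((T + 1, 2))                  # unobserved states, s_0 = 0
y = np.zeros((T, 1))                      # observations

for t in range(1, T + 1):
    # Transition equation: s_t = G s_{t-1} + eta_t
    s[t] = G @ s[t - 1] + rng.multivariate_normal(np.zeros(2), Omega_eta)
    # Measurement equation: y_t = F s_t + eps_t
    y[t - 1] = F @ s[t] + rng.multivariate_normal(np.zeros(1), Omega_eps)
```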

The Kalman Filter is similar in nature to the standard linear regression model: the state of the process $s_t$ corresponds to the regression coefficients. However, the state is not constant over time, which requires the introduction of the transition equation.

Bayesian Interpretation

Let $Y_t = (y_t, y_{t-1}, \ldots, y_1)$ denote the complete history of observed data at time $t$. Our goal is to obtain the posterior distribution of $s_t$ given $Y_t$. We know from Bayes' Theorem that
$$\Pr(s_t \mid Y_t) = \frac{\Pr(y_t \mid s_t, Y_{t-1}) \, \Pr(s_t \mid Y_{t-1})}{\Pr(y_t \mid Y_{t-1})} \propto \Pr(y_t \mid s_t, Y_{t-1}) \, \Pr(s_t \mid Y_{t-1}). \tag{eq:bayes}$$
The left-hand side is the posterior distribution of $s_t$. In the proportional form on the right, the first term is the likelihood of $s_t$ and the second term is the prior distribution of $s_t$. This equation defines a recursive Bayesian updating relationship.

At time $t-1$, our knowledge of the system is summarized by the posterior distribution
$$s_{t-1} \mid Y_{t-1} \sim N(\hat{s}_{t-1}, \Sigma_{t-1}), \tag{eq:PosteriorLastPeriod}$$
where $\hat{s}_{t-1}$ is our estimate of the mean of $s_{t-1}$. The process is initialized at time 0 by specifying $\hat{s}_0$ and $\Sigma_0$.

Before observing $y_t$, our best prediction of $s_t$ comes from (eq:transition), namely $G s_{t-1} + \eta_t$. Since $s_{t-1}$ itself is not observed, we combine this with (eq:PosteriorLastPeriod) to obtain the prior
$$s_t \mid Y_{t-1} \sim N(G \hat{s}_{t-1}, R_t), \tag{eq:BayesianPrior}$$
where
$$R_t \equiv G \Sigma_{t-1} G' + \Omega_\eta. \tag{eq:BayesR}$$
This follows directly from the properties of the multivariate Normal distribution.
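As a sketch, the prediction step that produces this prior amounts to two matrix products; the function below assumes the NumPy setup and the hypothetical variable names from the simulation example above.

```python
def predict(s_hat_prev, Sigma_prev, G, Omega_eta):
    # Prior for s_t given Y_{t-1} (eq:BayesianPrior): mean G s_hat_{t-1},
    # covariance R_t = G Sigma_{t-1} G' + Omega_eta (eq:BayesR).
    prior_mean = G @ s_hat_prev
    R_t = G @ Sigma_prev @ G.T + Omega_eta
    return prior_mean, R_t
```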

After observing $y_t$, we can update our knowledge about $s_t$ using the likelihood $\Pr(y_t \mid s_t, Y_{t-1})$. Let $e_t$ denote the error in predicting $y_t$,
$$e_t \equiv y_t - \hat{y}_t = y_t - F G \hat{s}_{t-1}. \tag{eq:error}$$
Observing $e_t$ is equivalent to observing $y_t$, since $F$, $G$, and $\hat{s}_{t-1}$ are all known. Thus, (eq:bayes) becomes
$$\Pr(s_t \mid y_t, Y_{t-1}) = \Pr(s_t \mid e_t, Y_{t-1}) \propto \Pr(e_t \mid s_t, Y_{t-1}) \, \Pr(s_t \mid Y_{t-1}). \tag{eq:bayes2}$$
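In code, the prediction error is a single line; again this is a sketch using the assumed names from the earlier examples.

```python
def prediction_error(y_t, s_hat_prev, F, G):
    # e_t = y_t - F G s_hat_{t-1} (eq:error)
    return y_t - F @ (G @ s_hat_prev)
```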

Using the measurement equation (eq:measurement), we can write $e_t = F(s_t - G \hat{s}_{t-1}) + \varepsilon_t$, and therefore
$$e_t \mid s_t, Y_{t-1} \sim N\big(F(s_t - G \hat{s}_{t-1}), \Omega_\varepsilon\big). \tag{eq:BayesianLikelihood}$$
From Bayes' Theorem, the posterior distribution of $s_t$ satisfies
$$\Pr(s_t \mid y_t, Y_{t-1}) = \frac{\Pr(e_t \mid s_t, Y_{t-1}) \, \Pr(s_t \mid Y_{t-1})}{\int \Pr(e_t, s_t \mid Y_{t-1}) \, ds_t}. \tag{eq:BayesianPosterior}$$
Once this probability is computed, we can perform another iteration of the recursion by going back to (eq:bayes).

Calculating the Posterior Distribution

We can calculate the posterior distribution (eq:BayesianPosterior) directly by appealing to the properties of the Normal distribution. Note that
$$\begin{pmatrix} s_t \\ e_t \end{pmatrix} \bigg|\, Y_{t-1} \sim N\!\left[ \begin{pmatrix} G \hat{s}_{t-1} \\ 0 \end{pmatrix}, \begin{pmatrix} R_t & R_t F' \\ F R_t & \Omega_\varepsilon + F R_t F' \end{pmatrix} \right], \tag{eq:CalculatingDistribution}$$
where $R_t$ is given by (eq:BayesR). Conditional on $e_t$, the distribution of $s_t$ is
$$s_t \mid e_t, Y_{t-1} \sim N\!\left[ G \hat{s}_{t-1} + R_t F' (\Omega_\varepsilon + F R_t F')^{-1} e_t,\; R_t - R_t F' (\Omega_\varepsilon + F R_t F')^{-1} F R_t \right]. \tag{eq:CalculatingPosterior}$$
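This conditioning step translates directly into code. The sketch below implements (eq:CalculatingPosterior) with a plain matrix inverse for readability (in practice a linear solve would be preferred); the names follow the assumed examples above.

```python
def update(prior_mean, R_t, e_t, F, Omega_eps):
    # Condition the joint Normal of (s_t, e_t) on the observed e_t
    # (eq:CalculatingDistribution -> eq:CalculatingPosterior).
    S = Omega_eps + F @ R_t @ F.T            # covariance of e_t given Y_{t-1}
    K = R_t @ F.T @ np.linalg.inv(S)         # R_t F' (Omega_eps + F R_t F')^{-1}
    post_mean = prior_mean + K @ e_t         # posterior mean of s_t
    post_cov = R_t - K @ F @ R_t             # posterior covariance Sigma_t
    return post_mean, post_cov
```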

To summarize, the posterior distribution of $s_t$ can be calculated recursively by first choosing initial values $\hat{s}_0$ and $\Sigma_0$. Then at each period $t$, given the posterior distribution of $s_{t-1}$ with mean $\hat{s}_{t-1}$ and covariance matrix $\Sigma_{t-1}$ as in (eq:PosteriorLastPeriod), we form a prior for $s_t$ with mean $G \hat{s}_{t-1}$ and covariance $R_t = G \Sigma_{t-1} G' + \Omega_\eta$ as in (eq:BayesianPrior), evaluate the likelihood in (eq:BayesianLikelihood) given $e_t = y_t - F G \hat{s}_{t-1}$, and then arrive at the time-$t$ posterior given by (eq:CalculatingPosterior).

Algorithm

Using the theoretical derivation as a guide, we can implement the Kalman Filter as a recursive algorithm (a code sketch of the full loop follows the steps below). Given initialization values $\hat{s}_0$ and $\Sigma_0$, at each time $t$:

  1. Start from the posterior distribution at time $t-1$, which is Normal with mean $\hat{s}_{t-1}$ and covariance matrix $\Sigma_{t-1}$;

  2. Form the covariance matrix of the prior distribution, $R_t = G \Sigma_{t-1} G' + \Omega_\eta$;

  3. Observe $y_t$, compute the prediction error $e_t = y_t - F G \hat{s}_{t-1}$, and calculate the mean of the posterior, $\hat{s}_t = G \hat{s}_{t-1} + R_t F' (\Omega_\varepsilon + F R_t F')^{-1} e_t$;

  4. Calculate the covariance matrix of the posterior: $\Sigma_t = R_t - R_t F' (\Omega_\varepsilon + F R_t F')^{-1} F R_t$.
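Putting steps 1-4 together, a compact sketch of the full recursion, under the same assumed NumPy setup as the earlier examples, might look like this:

```python
import numpy as np

def kalman_filter(y, s0, Sigma0, F, G, Omega_eps, Omega_eta):
    """Run the recursion for each observation; return posterior means and covariances."""
    s_hat, Sigma = s0, Sigma0
    means, covs = [], []
    for y_t in y:
        R = G @ Sigma @ G.T + Omega_eta          # step 2: prior covariance R_t
        e = y_t - F @ (G @ s_hat)                # prediction error e_t
        S = Omega_eps + F @ R @ F.T              # predictive covariance of e_t
        K = R @ F.T @ np.linalg.inv(S)           # R_t F' (Omega_eps + F R_t F')^{-1}
        s_hat = G @ s_hat + K @ e                # step 3: posterior mean
        Sigma = R - K @ F @ R                    # step 4: posterior covariance
        means.append(s_hat)
        covs.append(Sigma)
    return means, covs
```

With the simulated data from the model section, for instance, calling kalman_filter(y, np.zeros(2), np.eye(2), F, G, Omega_eps, Omega_eta) produces estimates of the hidden states; the initial values here are, again, arbitrary assumptions.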