# Auxiliary Particle Filter

These notes are based on Pitt and Shephard (1999), “Filtering via Simulation: Auxiliary Particle Filters,” *Journal of the American Statistical Association*, 94(446), 590–599.

## Introduction

Consider a time series ${y}_{t}$ for $t=1,\dots ,n$ whose observations are conditionally independent given an unobserved state ${\alpha }_{t}$, which is assumed to be a Markov process. We wish to perform on-line filtering to learn about the unobserved state given the currently available information by estimating the density $f\left({\alpha }_{t}|{y}_{1},\dots ,{y}_{t}\right)=f\left({\alpha }_{t}|{Y}_{t}\right)$ for $t=1,\dots ,n.$ The measurement density $f\left({y}_{t}|{\alpha }_{t}\right)$ and transition density $f\left({\alpha }_{t+1}|{\alpha }_{t}\right)$ implicitly depend on a finite vector of parameters. The initial distribution of the state is $f\left({\alpha }_{0}\right)$.

Suppose we know the filtering distribution $f\left({\alpha }_{t}|{Y}_{t}\right)$ at time $t$ and we receive a new observation for period $t+1$. We can obtain the updated filtering density in two steps. First, we use the transition density to obtain $f\left({\alpha }_{t+1}|{Y}_{t}\right)$ from $f\left({\alpha }_{t}|{Y}_{t}\right)$ as $f\left({\alpha }_{t+1}|{Y}_{t}\right)=\int f\left({\alpha }_{t+1}|{\alpha }_{t}\right)\mathrm{dF}\left({\alpha }_{t}|{Y}_{t}\right).$ Then, we obtain the new filtering density $f\left({\alpha }_{t+1}|{Y}_{t+1}\right)$ by using Bayes’ Theorem: $f\left({\alpha }_{t+1}|{Y}_{t+1}\right)=\frac{f\left({y}_{t+1}|{\alpha }_{t+1}\right)f\left({\alpha }_{t+1}|{Y}_{t}\right)}{\int f\left({y}_{t+1}|{\alpha }_{t+1}\right)\mathrm{dF}\left({\alpha }_{t+1}|{Y}_{t}\right)}.$

Hence, filtering essentially involves applying the recursive relationship

(1)$f\left({\alpha }_{t+1}|{Y}_{t+1}\right)\propto f\left({y}_{t+1}|{\alpha }_{t+1}\right)\int f\left({\alpha }_{t+1}|{\alpha }_{t}\right)\mathrm{dF}\left({\alpha }_{t}|{Y}_{t}\right).$

If the support of ${\alpha }_{t+1}|{\alpha }_{t}$ is known and finite, then the above integral is simply the weighted sum over the points in the support. In other cases, numerical methods might need to be used.
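To illustrate the finite-support case, here is a minimal sketch (not from the article) that runs recursion (1) for a hypothetical two-state Markov chain observed through Gaussian noise; the transition matrix, measurement density, and observations are all invented for illustration:

```python
import numpy as np

# Hypothetical two-state example: alpha_t in {0, 1} with known
# transition matrix P[i, j] = f(alpha_{t+1} = j | alpha_t = i).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def measurement_density(y, alpha):
    # f(y_t | alpha_t): Gaussian with state-dependent mean (0 or 1).
    return np.exp(-0.5 * (y - float(alpha)) ** 2) / np.sqrt(2 * np.pi)

def filter_step(pi_t, y_next):
    """One application of recursion (1) for a finite state space.

    pi_t[i] = f(alpha_t = i | Y_t). The integral over dF(alpha_t | Y_t)
    reduces to the weighted sum pi_t @ P; Bayes' Theorem then multiplies
    by the measurement density and renormalizes.
    """
    predictive = pi_t @ P                      # f(alpha_{t+1} | Y_t)
    likes = np.array([measurement_density(y_next, a) for a in (0, 1)])
    unnorm = likes * predictive                # numerator of Bayes' Theorem
    return unnorm / unnorm.sum()               # f(alpha_{t+1} | Y_{t+1})

pi = np.array([0.5, 0.5])                      # f(alpha_0)
for y in [0.1, 0.9, 1.1]:                      # toy observations
    pi = filter_step(pi, y)
```

Here the filter is exact: no simulation is needed because the weighted sum over the two support points evaluates the integral in (1) in closed form.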

## Particle Filters

Particle filters are a class of simulation-based filters that recursively approximate the distribution of ${\alpha }_{t}|{Y}_{t}$ using a collection of particles ${\alpha }_{t}^{1},\dots ,{\alpha }_{t}^{M}$ with probability masses ${\pi }_{t}^{1},\dots ,{\pi }_{t}^{M}$. The particles are thought of as a sample from $f\left({\alpha }_{t}|{Y}_{t}\right)$. In this article, the weights are taken to be equal: ${\pi }_{t}^{1}=\cdots ={\pi }_{t}^{M}=1/M$ for all $t$. The approximation should improve as $M\to \infty$. Thus, we can approximate the true filtering density (1) by an empirical one:

(2)$\stackrel{^}{f}\left({\alpha }_{t+1}|{Y}_{t+1}\right)\propto f\left({y}_{t+1}|{\alpha }_{t+1}\right)\sum _{j=1}^{M}f\left({\alpha }_{t+1}|{\alpha }_{t}^{j}\right).$

Then, a new sample of particles ${\alpha }_{t+1}^{1},\dots ,{\alpha }_{t+1}^{M}$ can be generated from this empirical density and the procedure can continue recursively. A particle filter is said to be fully adapted if it generates independent and identically distributed samples from (2). It is useful to think of (2) as a posterior density which is the product of a prior, ${\sum }_{j=1}^{M}f\left({\alpha }_{t+1}|{\alpha }_{t}^{j}\right)$, and a likelihood $f\left({y}_{t+1}|{\alpha }_{t+1}\right)$.

Assuming that we can evaluate $f\left({y}_{t+1}|{\alpha }_{t+1}\right)$ up to a constant of proportionality, we can sample from (2) by first obtaining a draw ${\alpha }_{t}^{j}$ with probability $1/M$ and then drawing from $f\left({\alpha }_{t+1}|{\alpha }_{t}^{j}\right)$. The authors describe three of the possible methods for doing this. The most commonly used is the sampling/importance resampling (SIR) method of Rubin (1987). The first particle filter, independently proposed by several authors, was based on SIR. In particular, Gordon, Salmond, and Smith (1993) suggested it for non-Gaussian, nonlinear state space models and Kitagawa (1996) for time series models. The other two methods, acceptance sampling and MCMC methods, are discussed in the article but not in these notes.

### Sampling/importance resampling (SIR)

Given a set of draws ${\alpha }_{t}^{1},\dots ,{\alpha }_{t}^{M}$, the SIR method first generates proposal draws ${\alpha }_{t+1}^{1},\dots ,{\alpha }_{t+1}^{R}$, each obtained by selecting a parent ${\alpha }_{t}^{j}$ uniformly at random and sampling from $f\left({\alpha }_{t+1}|{\alpha }_{t}^{j}\right)$, and assigns a weight ${\pi }_{t+1}^{j}$ to each draw, where

(3)${\pi }_{t+1}^{j}=\frac{{w}_{j}}{\sum _{i=1}^{R}{w}_{i}}$

and ${w}_{j}=f\left({y}_{t+1}|{\alpha }_{t+1}^{j}\right)$. As $R\to \infty$, this weighted sample converges to a sample from the empirical filtering density (2). To generate a uniformly weighted sample of size $M$, a resampling step is introduced in which the draws ${\alpha }_{t+1}^{1},\dots ,{\alpha }_{t+1}^{R}$ are resampled with weights ${\pi }_{t+1}^{1},\dots ,{\pi }_{t+1}^{R}$.
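The SIR step can be sketched as follows. The scalar model here is an assumption of this sketch, not from the article: transition ${\alpha }_{t+1}=0.8{\alpha }_{t}+N\left(0,1\right)$ and measurement ${y}_{t}={\alpha }_{t}+N\left(0,1\right)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, y_next, R, rng):
    """One SIR particle-filter step: propose blindly from the
    transition density, weight by the measurement density as in (3),
    then resample down to the original sample size M."""
    M = len(particles)
    # Proposal: pick a parent uniformly, then draw from f(alpha_{t+1} | alpha_t^j).
    parents = rng.integers(0, M, size=R)
    proposals = 0.8 * particles[parents] + rng.standard_normal(R)
    # Weights w_j = f(y_{t+1} | alpha_{t+1}^j), normalized as in (3).
    w = np.exp(-0.5 * (y_next - proposals) ** 2)
    pi = w / w.sum()
    # Resampling step: produce a uniformly weighted sample of size M.
    return rng.choice(proposals, size=M, p=pi)

particles = rng.standard_normal(500)   # sample from f(alpha_0) = N(0, 1)
particles = sir_step(particles, y_next=2.0, R=2000, rng=rng)
```

Note that the proposals are generated before $y_{t+1}$ is consulted; the observation enters only through the weights.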

Basically, the SIR particle filter above produces proposal draws of ${\alpha }_{t+1}$ without taking into account the new information, the value of ${y}_{t+1}$. A particle filter is said to be adapted if it makes proposal draws taking into account this new information. An adapted version of the algorithm would look something like

1. Draw ${\alpha }_{t+1}^{r}\sim g\left({\alpha }_{t+1}|{y}_{t+1}\right)$ for $r=1,\dots ,R$.

2. Evaluate the weights

(4)${w}_{t+1}^{r}=\frac{f\left({y}_{t+1}|{\alpha }_{t+1}^{r}\right)\sum _{j=1}^{M}f\left({\alpha }_{t+1}^{r}|{\alpha }_{t}^{j}\right)}{g\left({\alpha }_{t+1}^{r}|{y}_{t+1}\right)}.$
3. Resample with weights proportional to ${w}_{t+1}^{r}$ to obtain a sample of size $M$.

This algorithm allows the proposals to come from a general density $g\left({\alpha }_{t+1}|{y}_{t+1}\right)$ which depends on ${y}_{t+1}$, as opposed to the standard SIR particle filter, where the proposal density does not depend on ${y}_{t+1}$. To understand how the importance weights above were derived, consider importance sampling for $f\left({\alpha }_{t+1}|{Y}_{t+1}\right)$ with importance sampling density $g\left({\alpha }_{t+1}|{y}_{t+1}\right)$. We would first take draws from $g\left({\alpha }_{t+1}|{y}_{t+1}\right)$ and then weight by $f\left({\alpha }_{t+1}|{Y}_{t+1}\right)/g\left({\alpha }_{t+1}|{y}_{t+1}\right)$. But from Bayes’ Theorem,

(5)$f\left({\alpha }_{t+1}|{Y}_{t+1}\right)\propto f\left({y}_{t+1}|{\alpha }_{t+1}\right)\int f\left({\alpha }_{t+1}|{\alpha }_{t}\right)\mathrm{dF}\left({\alpha }_{t}|{Y}_{t}\right)\approx f\left({y}_{t+1}|{\alpha }_{t+1}\right)\sum _{j=1}^{M}f\left({\alpha }_{t+1}|{\alpha }_{t}^{j}\right).$

Hence, after dividing by $g\left({\alpha }_{t+1}|{y}_{t+1}\right)$, we have the importance weights shown above.

This illustrates the difficulty of adapting the standard particle filter. To obtain a single new particle we must evaluate $M+1$ densities: $f\left({y}_{t+1}|{\alpha }_{t+1}\right)$ as well as $f\left({\alpha }_{t+1}|{\alpha }_{t}^{j}\right)$ for each $j=1,\dots ,M$.
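To make the cost concrete, here is a minimal sketch of the adapted step, again for an invented scalar model with transition ${\alpha }_{t+1}=0.8{\alpha }_{t}+N\left(0,1\right)$ and measurement ${y}_{t}={\alpha }_{t}+N\left(0,1\right)$, and with a proposal $g$ simply centered on the observation (both assumptions of this sketch). Each of the $R$ weights in (4) requires all $M$ transition densities, an $O\left(MR\right)$ operation:

```python
import numpy as np

rng = np.random.default_rng(1)

def npdf(x, mean, sd=1.0):
    # Gaussian density, used for all three densities in this toy model.
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def adapted_sir_step(particles, y_next, R, rng):
    """Adapted SIR step: propose from g(alpha_{t+1} | y_{t+1}) and
    weight by (4). Note the O(M*R) cost: each of the R weights needs
    all M transition densities f(alpha_{t+1}^r | alpha_t^j)."""
    M = len(particles)
    # Proposal density g centered on the observation (an assumption here).
    proposals = y_next + rng.standard_normal(R)
    like = npdf(y_next, proposals)                   # f(y_{t+1} | alpha^r)
    # Mixture prior sum_j f(alpha^r | alpha_t^j): an (R, M) evaluation.
    prior = npdf(proposals[:, None], 0.8 * particles[None, :]).sum(axis=1)
    g = npdf(proposals, y_next)                      # proposal density
    w = like * prior / g                             # weights (4)
    return rng.choice(proposals, size=M, p=w / w.sum())

particles = rng.standard_normal(200)
particles = adapted_sir_step(particles, y_next=2.0, R=1000, rng=rng)
```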

## Auxiliary Particle Filters

The authors extend standard particle filtering methods by including an auxiliary variable which allows the particle filter to be adapted in a more efficient way. They introduce a variable, $k$, which is an index to the mixture (2) and filter in a higher dimension. This auxiliary variable is introduced only to aid in simulation. With this additional variable, the filtering density we wish to approximate becomes

(6)$f\left({\alpha }_{t+1},k|{Y}_{t+1}\right)\propto f\left({y}_{t+1}|{\alpha }_{t+1}\right)f\left({\alpha }_{t+1}|{\alpha }_{t}^{k}\right)$

for $k=1,\dots ,M$. Now, if we can sample from $f\left({\alpha }_{t+1},k|{Y}_{t+1}\right)$, then we can discard the sampled values of $k$ and be left with a sample from the original filtering density (2).

To sample from (6) using SIR, we make $R$ proposal draws $\left({\alpha }_{t+1}^{j},{k}^{j}\right)$ from some proposal density $g\left({\alpha }_{t+1},k|{Y}_{t+1}\right)$ and calculate the weights

(7)${w}_{j}=\frac{f\left({y}_{t+1}|{\alpha }_{t+1}^{j}\right)f\left({\alpha }_{t+1}^{j}|{\alpha }_{t}^{{k}^{j}}\right)}{g\left({\alpha }_{t+1}^{j},{k}^{j}|{Y}_{t+1}\right)}$

for $j=1,\dots ,R$.

The choice of $g$ is left completely to the researcher. The authors propose a generic choice of $g$ which can be applied in many situations and go on to provide more examples in specific models where the structure of the model informs the choice of $g$. Here, I present only the generic $g$ in terms of the SIR algorithm. The density (6) can be approximated by

(8)$g\left({\alpha }_{t+1},k|{Y}_{t+1}\right)\propto f\left({y}_{t+1}|{\mu }_{t+1}^{k}\right)f\left({\alpha }_{t+1}|{\alpha }_{t}^{k}\right)$

where ${\mu }_{t+1}^{k}$ is some value with a high probability of occurrence, for example, the mean or mode of the distribution of ${\alpha }_{t+1}|{\alpha }_{t}^{k}$. This choice is convenient because $g\left(k|{Y}_{t+1}\right)\propto \int f\left({y}_{t+1}|{\mu }_{t+1}^{k}\right)\mathrm{dF}\left({\alpha }_{t+1}|{\alpha }_{t}^{k}\right)=f\left({y}_{t+1}|{\mu }_{t+1}^{k}\right).$ Hence, we can draw from $g\left({\alpha }_{t+1},k|{Y}_{t+1}\right)$ by first drawing values of $k$ with probabilities ${\lambda }_{k}\propto g\left(k|{Y}_{t+1}\right)$, called the first stage weights, and then drawing from the transition density $f\left({\alpha }_{t+1}|{\alpha }_{t}^{k}\right)$. After sampling $R$ times from $g\left({\alpha }_{t+1},k|{Y}_{t+1}\right)$, we form the second stage weights

(9)${w}_{r}=\frac{f\left({y}_{t+1}|{\alpha }_{t+1}^{r}\right)}{f\left({y}_{t+1}|{\mu }_{t+1}^{{k}^{r}}\right)}$

for $r=1,\dots ,R$. If an equally weighted sample of size $M$ is desired, we can resample $M$ times with these weights.

## Auxiliary Particle Filter Algorithm

The following algorithm is based on the generic choice of $g$ from the discussion above. Other choices are possible, and may be more efficient for some model specifications.

1. Initialize the algorithm with a uniformly weighted sample ${\alpha }_{0}^{1},\dots ,{\alpha }_{0}^{M}$ from the distribution $f\left({\alpha }_{0}\right)$.

2. Given draws ${\alpha }_{t}^{1},\dots ,{\alpha }_{t}^{M}$ from $f\left({\alpha }_{t}|{Y}_{t}\right)$, determine ${\mu }_{t+1}^{k}$ and the first stage weights ${\lambda }_{k}\propto f\left({y}_{t+1}|{\mu }_{t+1}^{k}\right)$ for each $k=1,\dots ,M$.

3. For $r=1,\dots ,R$, draw ${k}^{r}$ from the indices $k=1,\dots ,M$ with weights ${\lambda }_{k}$ and then draw ${\alpha }_{t+1}^{r}$ from the transition density $f\left({\alpha }_{t+1}|{\alpha }_{t}^{{k}^{r}}\right).$

4. Form the weights ${w}_{r}$ according to (9).

5. Resample $M$ times from these $R$ draws with weights ${w}_{r}$ if desired.
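Putting the five steps together, here is a minimal sketch of the algorithm for a hypothetical scalar linear Gaussian model (the model, its parameters, and the observations are assumptions of this sketch, not from the article), taking ${\mu }_{t+1}^{k}$ to be the conditional mean $0.8{\alpha }_{t}^{k}$:

```python
import numpy as np

rng = np.random.default_rng(2)

def npdf(x, mean, sd=1.0):
    # Gaussian density for the toy model's measurement density.
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def apf_step(particles, y_next, R, rng):
    """One auxiliary-particle-filter step (steps 2-5) for the assumed
    model alpha_{t+1} = 0.8 alpha_t + N(0,1), y_t = alpha_t + N(0,1),
    with mu_{t+1}^k = E[alpha_{t+1} | alpha_t^k] = 0.8 alpha_t^k."""
    M = len(particles)
    mu = 0.8 * particles                         # mu_{t+1}^k
    lam = npdf(y_next, mu)                       # first stage weights
    lam = lam / lam.sum()
    k = rng.choice(M, size=R, p=lam)             # step 3: draw indices
    proposals = mu[k] + rng.standard_normal(R)   # step 3: draw from transition
    w = npdf(y_next, proposals) / npdf(y_next, mu[k])  # step 4: weights (9)
    return rng.choice(proposals, size=M, p=w / w.sum())  # step 5: resample

# Step 1: initialize from f(alpha_0) = N(0, 1).
particles = rng.standard_normal(500)
for y in [0.5, 1.0, 2.0]:                        # toy observations
    particles = apf_step(particles, y, R=2000, rng=rng)
```

Compared with the adapted SIR step, each weight here costs a single ratio of measurement densities rather than $M$ transition density evaluations, which is the efficiency gain the auxiliary variable buys.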