Auxiliary Particle Filter

These notes are based primarily on the following article:

Pitt, M. K. and Shephard, N. (1999), "Filtering via Simulation: Auxiliary Particle Filters," Journal of the American Statistical Association, 94(446), 590–599.

Introduction

Consider a time series $y_t$ for $t = 1, \dots, n$ that is independent conditional on an unobserved state $\alpha_t$, which is assumed to be a Markov process. We wish to perform on-line filtering to learn about the unobserved state given the currently available information by estimating the density $f(\alpha_t | y_1, \dots, y_t) = f(\alpha_t | Y_t)$ for $t = 1, \dots, n$. The measurement density $f(y_t | \alpha_t)$ and transition density $f(\alpha_{t+1} | \alpha_t)$ implicitly depend on a finite vector of parameters. The initial distribution of the state is $f(\alpha_0)$.

Suppose we know the filtering distribution $f(\alpha_t | Y_t)$ at time $t$ and we receive a new observation for period $t+1$. We can obtain the updated filtering density in two steps. First, we use the transition density to obtain $f(\alpha_{t+1} | Y_t)$ from $f(\alpha_t | Y_t)$ as
$$f(\alpha_{t+1} | Y_t) = \int f(\alpha_{t+1} | \alpha_t) \, dF(\alpha_t | Y_t).$$
Then, we obtain the new filtering density $f(\alpha_{t+1} | Y_{t+1})$ by using Bayes' Theorem:
$$f(\alpha_{t+1} | Y_{t+1}) = \frac{f(y_{t+1} | \alpha_{t+1}) \, f(\alpha_{t+1} | Y_t)}{\int f(y_{t+1} | \alpha_{t+1}) \, dF(\alpha_{t+1} | Y_t)}.$$

Hence, filtering essentially involves applying the recursive relationship
$$f(\alpha_{t+1} | Y_{t+1}) \propto f(y_{t+1} | \alpha_{t+1}) \int f(\alpha_{t+1} | \alpha_t) \, dF(\alpha_t | Y_t). \label{eq:filtering}$$
If the support of $\alpha_{t+1} | \alpha_t$ is known and finite, then the above integral is simply a weighted sum over the points in the support. In other cases, numerical methods might need to be used.
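To make the finite-support case concrete, here is a minimal sketch of one pass of the recursion (eq:filtering) for a hypothetical two-state Markov chain observed with Gaussian noise. The transition matrix, state values, noise scale, and observations below are made up purely for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-state example: alpha_t takes values in {-1.0, 1.0},
# y_t = alpha_t + Gaussian noise. All numbers are illustrative only.
states = np.array([-1.0, 1.0])
P = np.array([[0.9, 0.1],   # P[i, j] = f(alpha_{t+1} = states[j] | alpha_t = states[i])
              [0.2, 0.8]])
sigma_y = 0.5               # measurement noise scale

def filter_step(p_t, y_next):
    """One application of (eq:filtering):
    f(a_{t+1}|Y_{t+1}) proportional to f(y_{t+1}|a_{t+1}) * sum_i f(a_{t+1}|a_t=i) f(a_t=i|Y_t)."""
    predict = P.T @ p_t                                        # f(alpha_{t+1} | Y_t) as a weighted sum
    likelihood = norm.pdf(y_next, loc=states, scale=sigma_y)   # f(y_{t+1} | alpha_{t+1})
    unnormalized = likelihood * predict
    return unnormalized / unnormalized.sum()                   # normalize (Bayes' Theorem)

p = np.array([0.5, 0.5])        # f(alpha_0)
for y in [0.8, 1.1, -0.9]:      # made-up observations
    p = filter_step(p, y)
    print(p)
```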

Particle Filters

Particle filters are a class of simulation-based filters that recursively approximate the distribution of $\alpha_t | Y_t$ using a collection of particles $\alpha_t^1, \dots, \alpha_t^M$ with probability masses $\pi_t^1, \dots, \pi_t^M$. The particles are thought of as a sample from $f(\alpha_t | Y_t)$. In this article, the weights are taken to be equal: $\pi_t^1 = \dots = \pi_t^M = 1/M$ for all $t$. As $M \to \infty$, the approximation should improve. Thus, we can approximate the true filtering density (eq:filtering) by an empirical one:
$$\hat{f}(\alpha_{t+1} | Y_{t+1}) \propto f(y_{t+1} | \alpha_{t+1}) \sum_{j=1}^M f(\alpha_{t+1} | \alpha_t^j). \label{eq:empirical_filtering}$$
Then, a new sample of particles $\alpha_{t+1}^1, \dots, \alpha_{t+1}^M$ can be generated from this empirical density and the procedure can continue recursively. A particle filter is said to be fully adapted if it generates independent and identically distributed samples from (eq:empirical_filtering). It is useful to think of (eq:empirical_filtering) as a posterior density which is the product of a prior, $\sum_{j=1}^M f(\alpha_{t+1} | \alpha_t^j)$, and a likelihood, $f(y_{t+1} | \alpha_{t+1})$.

Assuming that we can evaluate $f(y_{t+1} | \alpha_{t+1})$ up to a constant of proportionality, we can sample from (eq:empirical_filtering) by proposing draws from the mixture prior (first selecting an index $j$ with probability $1/M$ and then drawing from $f(\alpha_{t+1} | \alpha_t^j)$) and then correcting these proposals using the likelihood. The authors describe three possible methods for carrying out this correction. The most commonly used is the sampling/importance resampling (SIR) method of Rubin (1987). The first particle filter, independently proposed by several authors, was based on SIR: Gordon, Salmond, and Smith (1993) suggested it for non-Gaussian, nonlinear state space models and Kitagawa (1996) for time series models. The other two methods, acceptance sampling and MCMC, are discussed in the article but not in these notes.

Sampling/importance resampling (SIR)

Given a set of draws $\alpha_t^1, \dots, \alpha_t^M$, the SIR method first takes $R$ draws $\alpha_{t+1}^1, \dots, \alpha_{t+1}^R$ from the mixture $\frac{1}{M} \sum_{j=1}^M f(\alpha_{t+1} | \alpha_t^j)$ (selecting an index $j$ uniformly and then drawing from $f(\alpha_{t+1} | \alpha_t^j)$) and assigns a weight $\pi_{t+1}^j$ to each draw, where
$$\pi_{t+1}^j = \frac{w_j}{\sum_{i=1}^R w_i}, \qquad w_j = f(y_{t+1} | \alpha_{t+1}^j).$$
As $R \to \infty$, this weighted sample represents the empirical filtering distribution (eq:empirical_filtering) increasingly well. To generate a random sample of size $M$, a resampling step is introduced in which the draws $\alpha_{t+1}^1, \dots, \alpha_{t+1}^R$ are resampled with weights $\pi_{t+1}^1, \dots, \pi_{t+1}^R$ to produce a uniformly weighted sample.
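The following is a minimal sketch of one SIR step, assuming a hypothetical Gaussian AR(1) state with Gaussian measurement error; the model and the parameter values phi, sigma_eta, and sigma_eps are illustrative assumptions, not something taken from the article.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative model: alpha_{t+1} = phi * alpha_t + eta,  y_t = alpha_t + eps
phi, sigma_eta, sigma_eps = 0.95, 0.3, 0.5

def sir_step(particles, y_next, R=None):
    """One SIR update: propagate through the transition density, weight by the
    measurement density, and resample back to M equally weighted particles."""
    M = len(particles)
    R = R or M
    # Draw alpha_{t+1}^j: pick a parent index uniformly, then sample the transition density
    parents = rng.integers(0, M, size=R)
    proposals = phi * particles[parents] + sigma_eta * rng.standard_normal(R)
    # w_j = f(y_{t+1} | alpha_{t+1}^j), normalized to pi_{t+1}^j
    w = norm.pdf(y_next, loc=proposals, scale=sigma_eps)
    pi = w / w.sum()
    # Resample M times to return to an equally weighted sample
    return rng.choice(proposals, size=M, p=pi)

particles = rng.standard_normal(1000)   # sample from an assumed f(alpha_0)
particles = sir_step(particles, y_next=0.4)
```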

Adaption

Basically, the SIR particle filter above produces proposal draws of $\alpha_{t+1}$ without taking into account the new information, namely the value of $y_{t+1}$. A particle filter is said to be adapted if its proposal draws take this new information into account. An adapted version of the algorithm would look something like this:

  1. Draw $\alpha_{t+1}^r \sim g(\alpha_{t+1} | y_{t+1})$ for $r = 1, \dots, R$.

  2. Evaluate the weights
     $$w_{t+1}^r = \frac{f(y_{t+1} | \alpha_{t+1}^r) \sum_{j=1}^M f(\alpha_{t+1}^r | \alpha_t^j)}{g(\alpha_{t+1}^r | y_{t+1})}.$$

  3. Resample with weights proportional to $w_{t+1}^r$ to obtain a sample of size $M$.

This algorithm allows proposals to come from a general density $g(\alpha_{t+1} | y_{t+1})$ that depends on $y_{t+1}$, as opposed to the standard SIR particle filter, where the proposal density does not depend on $y_{t+1}$. To understand how the importance weights above were derived, consider an importance sampler for $f(\alpha_{t+1} | Y_{t+1})$ with importance sampling density $g(\alpha_{t+1} | y_{t+1})$. We would first take draws from $g(\alpha_{t+1} | y_{t+1})$ and then weight by $f(\alpha_{t+1} | Y_{t+1}) / g(\alpha_{t+1} | y_{t+1})$. But from Bayes' Theorem,
$$f(\alpha_{t+1} | Y_{t+1}) \propto f(y_{t+1} | \alpha_{t+1}) \int f(\alpha_{t+1} | \alpha_t) \, dF(\alpha_t | Y_t) \approx f(y_{t+1} | \alpha_{t+1}) \sum_{j=1}^M f(\alpha_{t+1} | \alpha_t^j).$$
Hence, after dividing by $g(\alpha_{t+1} | y_{t+1})$, we obtain the importance weights shown above.
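To make the weight computation concrete, the sketch below implements steps 1 to 3 for the same illustrative Gaussian AR(1) model used earlier, taking $g(\alpha_{t+1} | y_{t+1})$ to be a normal density centred at $y_{t+1}$ purely as an assumption for this example. The row-wise sum over all $M$ particles inside each weight corresponds to the $M+1$ density evaluations discussed next.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
phi, sigma_eta, sigma_eps = 0.95, 0.3, 0.5   # illustrative model parameters

def adapted_step(particles, y_next, R=None):
    M = len(particles)
    R = R or M
    # 1. Proposals from g(alpha_{t+1} | y_{t+1}): here, an assumed normal around y_{t+1}
    proposals = y_next + sigma_eps * rng.standard_normal(R)
    g_dens = norm.pdf(proposals, loc=y_next, scale=sigma_eps)
    # 2. Weights: f(y | a) * sum_j f(a | a_t^j) / g(a | y)  -- O(M) work per proposal
    lik = norm.pdf(y_next, loc=proposals, scale=sigma_eps)
    trans_mix = norm.pdf(proposals[:, None], loc=phi * particles[None, :],
                         scale=sigma_eta).sum(axis=1)
    w = lik * trans_mix / g_dens
    # 3. Resample M times with weights proportional to w
    return rng.choice(proposals, size=M, p=w / w.sum())

particles = rng.standard_normal(500)
particles = adapted_step(particles, y_next=0.4)
```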

This illustrates the difficulty of adapting the standard particle filter. To obtain a single new particle we must evaluate $M + 1$ densities: $f(y_{t+1} | \alpha_{t+1})$ as well as $f(\alpha_{t+1} | \alpha_t^j)$ for each $j = 1, \dots, M$.

Auxiliary Particle Filters

The authors extend standard particle filtering methods by including an auxiliary variable which allows the particle filter to be adapted in a more efficient way. They introduce a variable $k$, which is an index into the mixture (eq:empirical_filtering), and filter in a higher dimension. This auxiliary variable is introduced only to aid in simulation. With this additional variable, the filtering density we wish to approximate becomes
$$f(\alpha_{t+1}, k | Y_{t+1}) \propto f(y_{t+1} | \alpha_{t+1}) \, f(\alpha_{t+1} | \alpha_t^k), \qquad k = 1, \dots, M. \label{eq:filter_density_aux}$$
Now, if we can sample from $f(\alpha_{t+1}, k | Y_{t+1})$, then we can discard the sampled values of $k$ and be left with a sample from the original filtering density (eq:empirical_filtering).

To sample from (eq:filter_density_aux) using SIR, we make $R$ proposal draws $(\alpha_{t+1}^j, k^j)$ from some proposal density $g(\alpha_{t+1}, k | Y_{t+1})$ and calculate the weights
$$w_j = \frac{f(y_{t+1} | \alpha_{t+1}^j) \, f(\alpha_{t+1}^j | \alpha_t^{k^j})}{g(\alpha_{t+1}^j, k^j | Y_{t+1})}, \qquad j = 1, \dots, R.$$

The choice of $g$ is left completely to the researcher. The authors propose a generic choice of $g$ which can be applied in many situations, and they provide further examples in specific models where the structure of the model informs the choice of $g$. Here, I present only the generic $g$ in terms of the SIR algorithm. The density (eq:filter_density_aux) can be approximated by
$$g(\alpha_{t+1}, k | Y_{t+1}) \propto f(y_{t+1} | \mu_{t+1}^k) \, f(\alpha_{t+1} | \alpha_t^k),$$
where $\mu_{t+1}^k$ is some value with a high probability of occurrence, for example the mean or mode of the distribution of $\alpha_{t+1} | \alpha_t^k$. This choice is made for convenience since
$$g(k | Y_{t+1}) \propto f(y_{t+1} | \mu_{t+1}^k) \int dF(\alpha_{t+1} | \alpha_t^k) = f(y_{t+1} | \mu_{t+1}^k).$$
Hence, we can draw from $g(\alpha_{t+1}, k | Y_{t+1})$ by first drawing values of $k$ with probabilities $\lambda_k \propto g(k | Y_{t+1})$ and then drawing from the transition density $f(\alpha_{t+1} | \alpha_t^k)$. The weights $\lambda_k$ are called first stage weights. Then, after sampling $R$ times from $g(\alpha_{t+1}, k | Y_{t+1})$, we form the second stage weights
$$w_r = \frac{f(y_{t+1} | \alpha_{t+1}^r)}{f(y_{t+1} | \mu_{t+1}^{k^r})}, \qquad r = 1, \dots, R. \label{eq:aux_weights}$$
We can then resample $M$ times from these weighted draws to return to a uniformly weighted sample.

Auxiliary Particle Filter Algorithm

The following algorithm is based on the generic choice of g from the discussion above. Other choices are possible, and may be more efficient for some model specifications.

  1. Initialize the algorithm with a uniformly weighted sample $\alpha_0^1, \dots, \alpha_0^M$ from the initial distribution $f(\alpha_0)$.

  2. Given draws $\alpha_t^1, \dots, \alpha_t^M$ from $f(\alpha_t | Y_t)$, determine $\mu_{t+1}^k$ and the first stage weights $\lambda_k \propto f(y_{t+1} | \mu_{t+1}^k)$ for each $k = 1, \dots, M$.

  3. For $r = 1, \dots, R$, draw $k^r$ from the indices $1, \dots, M$ with weights $\lambda_k$, and then draw $\alpha_{t+1}^r$ from the transition density $f(\alpha_{t+1} | \alpha_t^{k^r})$.

  4. Form the second stage weights $w_r$ according to (eq:aux_weights).

  5. If desired, resample $M$ times from these $R$ draws with weights proportional to $w_r$ to obtain an equally weighted sample from $f(\alpha_{t+1} | Y_{t+1})$.
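As a concrete illustration of the full algorithm, here is a minimal sketch in Python for a hypothetical linear Gaussian AR(1) state space model. The model, the parameter values phi, sigma_eta, and sigma_eps, and the choice of $\mu_{t+1}^k$ as the conditional mean $\phi \alpha_t^k$ are assumptions made for this example, not specifications from the article.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Illustrative model (an assumption for this sketch):
# alpha_{t+1} = phi * alpha_t + sigma_eta * eta_t,   y_t = alpha_t + sigma_eps * eps_t
phi, sigma_eta, sigma_eps = 0.95, 0.3, 0.5

def auxiliary_particle_filter(y, M=1000, R=2000):
    # Step 1: uniformly weighted sample from an assumed f(alpha_0)
    particles = rng.standard_normal(M)
    filtered_means = []
    for y_next in y:
        # Step 2: mu_{t+1}^k (here the conditional mean) and first stage weights lambda_k
        mu = phi * particles
        lam = norm.pdf(y_next, loc=mu, scale=sigma_eps)
        lam /= lam.sum()
        # Step 3: draw indices k^r with weights lambda_k, then alpha_{t+1}^r from the transition
        k = rng.choice(M, size=R, p=lam)
        proposals = phi * particles[k] + sigma_eta * rng.standard_normal(R)
        # Step 4: second stage weights w_r = f(y_{t+1} | alpha_{t+1}^r) / f(y_{t+1} | mu_{t+1}^{k^r})
        w = norm.pdf(y_next, loc=proposals, scale=sigma_eps) \
            / norm.pdf(y_next, loc=mu[k], scale=sigma_eps)
        w /= w.sum()
        # Step 5: resample M times to return to an equally weighted sample
        particles = rng.choice(proposals, size=M, p=w)
        filtered_means.append(particles.mean())
    return np.array(filtered_means)

# Simulate data from the same illustrative model and run the filter
T = 100
alpha = np.zeros(T)
for t in range(1, T):
    alpha[t] = phi * alpha[t - 1] + sigma_eta * rng.standard_normal()
y = alpha + sigma_eps * rng.standard_normal(T)
print(auxiliary_particle_filter(y)[:5])   # estimates of E[alpha_t | Y_t]
```

Since this illustrative model is linear and Gaussian, the exact filtering distribution is available from the Kalman filter, which makes it a convenient check on the output of a sketch like this.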