Crisan (2001)

Particle Filters—A Theoretical Perspective.

These notes are based on the article:

Crisan, Dan (2001): “Particle Filters–A Theoretical Perspective,” in Sequential Monte Carlo Methods in Practice, ed. by A. Doucet, N. de Freitas, and N. Gordon. New York: Springer-Verlag.

Introduction

This article provides a formal mathematical treatment of the convergence of particle filters. It presents several theorems which provide necessary and sufficient conditions for each of two types of convergence of the particle filter to the posterior distribution of the signal.

Notation

This chapter essentially deals with random measures on $\mathbb{R}^d$. Let $\mathcal{B}(\mathbb{R}^d)$ denote the $\sigma$-algebra of Borel subsets of $\mathbb{R}^d$, let $B(\mathbb{R}^d)$ denote the set of bounded measurable functions on $\mathbb{R}^d$, and let $\mathcal{C}_b(\mathbb{R}^d)$ denote the set of bounded continuous functions on $\mathbb{R}^d$. We employ the sup norm on $\mathcal{C}_b(\mathbb{R}^d)$, $\|f\| \equiv \sup_{x \in \mathbb{R}^d} |f(x)|$ for any $f \in \mathcal{C}_b(\mathbb{R}^d)$.

Let $\mathcal{P}(\mathbb{R}^d)$ denote the set of probability measures over $\mathcal{B}(\mathbb{R}^d)$. For any such measure $\mu \in \mathcal{P}(\mathbb{R}^d)$ and any measurable function $f$, we denote the integral of $f$ with respect to $\mu$ as $\mu f \equiv \int f \, d\mu$.

We endow $\mathcal{P}(\mathbb{R}^d)$ with the weak topology. Then for any sequence $\{\mu_n\}$ in $\mathcal{P}(\mathbb{R}^d)$, we say that $\mu_n$ converges weakly to $\mu \in \mathcal{P}(\mathbb{R}^d)$ if $\lim_{n \to \infty} \mu_n f = \mu f$ for all $f \in \mathcal{C}_b(\mathbb{R}^d)$. In this case we write $\lim_{n \to \infty} \mu_n = \mu$ or simply $\mu_n \Rightarrow \mu$.
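
For instance, the sequence of point masses $\mu_n = \delta_{1/n}$ on $\mathbb{R}$ converges weakly to $\delta_0$: for any $f \in \mathcal{C}_b(\mathbb{R})$, continuity of $f$ gives $\mu_n f = f(1/n) \to f(0) = \delta_0 f$.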

Markov chains

Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $X = \{X_t, t \in \mathbb{N}\}$ be a stochastic process on this space. Let $\mathcal{F}_t^X$ denote the $\sigma$-algebra generated by the process up to time $t$. We say that $X$ is a Markov chain if for every $t \in \mathbb{N}$ and $A \in \mathcal{B}(\mathbb{R}^{n_x})$ we have $P(X_{t+1} \in A \mid \mathcal{F}_t^X) = P(X_{t+1} \in A \mid X_t)$. Let $K_t : \mathbb{R}^{n_x} \times \mathcal{B}(\mathbb{R}^{n_x}) \to [0, 1]$ denote the transition kernel of $X$, where $K_t(x, A) = P(X_{t+1} \in A \mid X_t = x)$ gives the probability of $X_{t+1}$ arriving in $A$ given that $X_t = x$.
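
To make the kernel concrete, here is a minimal simulation sketch; the Gaussian random-walk kernel $K_t(x, \cdot) = N(x, \sigma^2)$ and all parameter values are illustrative assumptions, not from the chapter.

```python
import numpy as np

def simulate_chain(x0, kernel, T, rng):
    """Simulate a Markov chain: X_{t+1} ~ K_t(X_t, .)."""
    path = [x0]
    for t in range(T):
        path.append(kernel(t, path[-1], rng))
    return np.array(path)

# An assumed Gaussian random-walk kernel: K_t(x, .) = N(x, sigma^2).
sigma = 0.5
kernel = lambda t, x, rng: x + sigma * rng.normal()

rng = np.random.default_rng(0)
path = simulate_chain(0.0, kernel, T=100, rng=rng)
```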

The Filtering Problem

Let $X = \{X_t, t \in \mathbb{N}\}$ denote the signal, a Markov process on $\mathbb{R}^{n_x}$ with kernel $K_t(x, dy)$, and let $Y = \{Y_t, t \in \mathbb{N}\}$ denote the observation process, a stochastic process on $\mathbb{R}^{n_y}$ which evolves according to $Y_t = h(t, X_t) + W_t$ for $t > 0$, with $Y_0 = 0$. Here, $h : \mathbb{N} \times \mathbb{R}^{n_x} \to \mathbb{R}^{n_y}$ is a measurable function that is continuous on $\mathbb{R}^{n_x}$ for all $t$, and the $W_t : \Omega \to \mathbb{R}^{n_y}$ are independent random vectors. Let $g(t, \cdot)$ denote the density of $W_t$ with respect to Lebesgue measure and suppose that $g$ is bounded and continuous.
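
As a sketch of this setup, the following simulates a signal and observation path under assumptions of my own (not the chapter's): a scalar state, $h(t, x) = x$, and Gaussian noise terms.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50
a, sigma_v, sigma_w = 0.9, 0.5, 1.0   # assumed model parameters

x = np.zeros(T + 1)   # signal X_t, started from pi_0 = delta_0
y = np.zeros(T + 1)   # observation process, with Y_0 = 0 as in the setup
for t in range(1, T + 1):
    x[t] = a * x[t - 1] + sigma_v * rng.normal()   # one step of the signal kernel
    y[t] = x[t] + sigma_w * rng.normal()           # Y_t = h(t, X_t) + W_t, h(t, x) = x
```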

The filtering problem is to compute the conditional distribution of the signal $X_t$ given a sequence of observations $y_{0:t} = \{y_0, \dots, y_t\}$ on the observation process $Y$ up to time $t$. We denote this distribution of interest by $\pi_t^{y_{0:t}}(A) \equiv P(X_t \in A \mid Y_{0:t} = y_{0:t})$ for all $A \in \mathcal{B}(\mathbb{R}^{n_x})$. We will also use the predicted conditional distribution given $y_{0:t-1}$, $p_t^{y_{0:t-1}}(A) \equiv P(X_t \in A \mid Y_{0:t-1} = y_{0:t-1})$.

Convergence of random measures

The particle filter essentially constructs a sequence of random measures, or measure-valued random variables, which approximate the true conditional distribution $\pi_t^{y_{0:t}}$. This sequence is designed to converge to the true distribution, but we must be more specific about what we mean by convergence here. Crisan outlines two modes of convergence in the chapter, but I will only cover the second mode here: almost sure convergence in the weak topology.

Definition. Let $\{\mu_n\}$ be a sequence of random probability measures, $\mu_n : \Omega \to \mathcal{P}(\mathbb{R}^d)$, and let $\mu \in \mathcal{P}(\mathbb{R}^d)$ be a deterministic probability measure. We say that the sequence $\{\mu_n\}$ converges weakly almost surely to $\mu$ if there exists some set $E \in \mathcal{F}$ with $P(E) = 0$ such that for each $\omega \notin E$, we have $\lim_{n \to \infty} \mu_n(\omega) f = \mu f$ for all $f \in \mathcal{C}_b(\mathbb{R}^d)$. We write this as $\mu_n \Rightarrow \mu$ $P$-a.s.
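
The canonical example is the empirical measure $\mu_n(\omega) = \frac{1}{n} \sum_{i=1}^n \delta_{X_i(\omega)}$ of i.i.d. draws $X_1, X_2, \dots$ from $\mu$: the strong law of large numbers gives $\mu_n(\omega) f \to \mu f$ for each fixed $f$, and a standard separability argument upgrades this to $\mu_n \Rightarrow \mu$ $P$-a.s. This is exactly the kind of random measure a particle filter produces.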

Particle Filters

The class of particle filters described below involves a collection of $n$ particles which evolve according to a specified Markov transition kernel each period and are resampled with replacement according to their importance weights. At each point in time $t$, the resulting empirical measure $\pi_t^n$ is shown to converge to the true conditional distribution $\pi_t^{y_{0:t}}$ as the number of particles approaches infinity.

Crisan discusses a much more general class of algorithms than the one I describe below, but most particle filters used in practice follow an algorithm similar to the following.

Initialization. The particle filter is initialized with an empirical measure $\pi_0^n$ built from a random sample of size $n$ from $\pi_0$: $\pi_0^n = \frac{1}{n} \sum_{i=1}^n \delta_{x_0^{(i)}}$, where $x_0^{(i)} \sim \pi_0$ and $\delta_x$ denotes the Dirac delta mass located at $x$.

Note that $\pi_0^n$ is a random measure because the particles were drawn randomly from $\pi_0$. The resulting measure is conditional upon the particular sequence of random deviates used to construct these draws. It is useful to think of $\omega$ in this case as a vector of Uniform$(0, 1)$ deviates that could be used to generate draws from the distribution $\pi_0$. In subsequent steps, it will also be necessary to generate random draws to transition the particles.
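
As a minimal sketch of the initialization step (assuming, purely for illustration, that $\pi_0$ is a standard normal distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Draw n particles x_0^(i) ~ pi_0; pi_0 = N(0, 1) is an illustrative assumption.
particles = rng.normal(size=n)

# Integrating a test function f against pi_0^n is just a sample average.
f = np.tanh                    # some bounded continuous test function
pi0n_f = f(particles).mean()   # approximates pi_0 f
```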

Iteration. This step constructs $\pi_t^n$ given $\pi_{t-1}^n = \frac{1}{n} \sum_{i=1}^n \delta_{x_{t-1}^{(i)}}$. We pass each $x_{t-1}^{(i)}$ through the transition kernel to generate particles $\bar{x}_t^{(i)}$ such that $\bar{x}_t^{(i)} \sim K_{t-1}(x_{t-1}^{(i)}, \cdot)$. These particles move independently of each other. We can now build an approximation to the predicted conditional distribution $p_t^{y_{0:t-1}}$: $p_t^n = \frac{1}{n} \sum_{i=1}^n \delta_{\bar{x}_t^{(i)}}$.
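
In code, the prediction step applies an independent kernel draw to each particle; a sketch assuming the Gaussian random-walk kernel used above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sigma = 0.5
particles = rng.normal(size=n)   # x_{t-1}^(i): previous particle set (illustrative)

# Propagate each particle independently: xbar_t^(i) ~ K_{t-1}(x_{t-1}^(i), .),
# with the assumed kernel K(x, .) = N(x, sigma^2).
predicted = particles + sigma * rng.normal(size=n)

# p_t^n places mass 1/n on each predicted particle xbar_t^(i).
```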

Next, for each particle $\bar{x}_t^{(i)}$ we compute the normalized importance weight $w_t^{(i)} = g(t, y_t - h(t, \bar{x}_t^{(i)})) / \sum_{j=1}^n g(t, y_t - h(t, \bar{x}_t^{(j)}))$, the likelihood of the observation $y_t$ at each predicted particle. Now, we resample $n$ particles with replacement in proportion to the importance weights to obtain a sample $\{x_t^{(1)}, \dots, x_t^{(n)}\}$ in order to construct the desired approximation $\pi_t^n = \frac{1}{n} \sum_{i=1}^n \delta_{x_t^{(i)}}$.
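
A sketch of the weighting and resampling step, assuming the scalar observation model $Y_t = X_t + W_t$ with $W_t \sim N(0, \sigma_w^2)$, so that $g(t, w) \propto \exp(-w^2 / 2\sigma_w^2)$ and the normalizing constant cancels when the weights are normalized:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sigma_w = 1.0
predicted = rng.normal(size=n)   # xbar_t^(i): predicted particles (illustrative)
y_t = 0.3                        # current observation (illustrative value)

# Unnormalized weights g(t, y_t - h(t, xbar_t^(i))) with h(t, x) = x;
# computed on the log scale for numerical stability.
logw = -0.5 * ((y_t - predicted) / sigma_w) ** 2
w = np.exp(logw - logw.max())
w /= w.sum()

# Resample n particles with replacement in proportion to the weights.
particles = rng.choice(predicted, size=n, p=w)
```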

Convergence Theorems

Crisan shows that if the random measures $\pi_t^n$ are constructed according to the algorithm described above, then for all $t \in \mathbb{N}$, we have $\pi_t^n \Rightarrow \pi_t^{y_{0:t}}$ $P$-a.s. Again, Crisan's results are much more general, but this simple example illustrates the general idea of the results.
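
To see the convergence numerically, one can compare the filter against a model where $\pi_t^{y_{0:t}}$ is known in closed form. The sketch below (all parameters are my own illustrative assumptions) runs a bootstrap filter on a scalar linear-Gaussian model and compares its posterior-mean estimate with the exact Kalman filter; the error should generally shrink as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma_v, sigma_w, T = 0.9, 0.5, 1.0, 25   # assumed model parameters

# Simulate the model: X_t = a X_{t-1} + V_t, Y_t = X_t + W_t.
x = np.zeros(T + 1)
y = np.zeros(T + 1)
for t in range(1, T + 1):
    x[t] = a * x[t - 1] + sigma_v * rng.normal()
    y[t] = x[t] + sigma_w * rng.normal()

def kalman_mean(y):
    """Exact posterior mean E[X_T | y] for the linear-Gaussian model."""
    m, P = 0.0, 0.0                          # pi_0 = delta_0
    for t in range(1, len(y)):
        m_pred, P_pred = a * m, a**2 * P + sigma_v**2
        K = P_pred / (P_pred + sigma_w**2)   # Kalman gain
        m = m_pred + K * (y[t] - m_pred)
        P = (1 - K) * P_pred
    return m

def particle_mean(y, n):
    """Bootstrap-filter estimate of E[X_T | y] with n particles."""
    parts = np.zeros(n)                      # pi_0 = delta_0
    for t in range(1, len(y)):
        parts = a * parts + sigma_v * rng.normal(size=n)   # predict
        logw = -0.5 * ((y[t] - parts) / sigma_w) ** 2      # weight
        w = np.exp(logw - logw.max()); w /= w.sum()
        parts = rng.choice(parts, size=n, p=w)             # resample
    return parts.mean()

exact = kalman_mean(y)
for n in (100, 1000, 10000):
    print(n, abs(particle_mean(y, n) - exact))
```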

Notes

The results presented in this chapter concern only convergence; rates of convergence are not addressed, but are treated in Crisan, Del Moral, and Lyons (1999).