# Maximum Simulated Likelihood

Given a sample of observations $\{y_i : i = 1, \dots, N\}$, the log-likelihood function for an unknown parameter $\theta$ is

$$l_N(\theta) \equiv \sum_{i=1}^{N} \ln f(\theta \mid y_i).$$

Let $\tilde{f}(\theta \mid y_i, \omega)$ be an unbiased simulator such that

$$\operatorname{E}_\omega\left[\tilde{f}(\theta \mid y_i, \omega)\right] = f(\theta \mid y_i),$$

where $\omega$ is a vector of $R$ simulated random variates. Then, the maximum simulated likelihood (MSL) estimator for $\theta$ is

$$\hat{\theta}_{\text{MSL}} \equiv \arg\max_{\theta \in \Theta} \tilde{l}_N(\theta),$$

where ${\tilde{l}}_{N}(\theta )\equiv {\sum}_{i=1}^{N}\mathrm{ln}\tilde{f}(\theta |{y}_{i},\omega )$ for some sequence of simulations $\{{\omega}_{i}\}$.

There are two points that deserve special attention. First, the estimator is conditional on the particular sequence of simulations $\{\omega_i\}$ used: a different sequence yields a different estimate. Second, even though the simulator of $f$ is unbiased, the resulting MSL estimate will be biased. That is, even though we have

$$\operatorname{E}_\omega\left[\tilde{f}(\theta \mid y_i, \omega)\right] = f(\theta \mid y_i),$$

this does *not* imply

$$\operatorname{E}_\omega\left[\ln \tilde{f}(\theta \mid y_i, \omega)\right] = \ln f(\theta \mid y_i).$$

Although the likelihood function itself can usually be simulated without bias, unbiased simulation of the *log*-likelihood is generally infeasible: the natural log is a nonlinear (concave) transformation, so by Jensen's inequality $\operatorname{E}_\omega[\ln \tilde{f}] \le \ln \operatorname{E}_\omega[\tilde{f}] = \ln f$, with strict inequality whenever the simulator has positive variance.
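A quick numerical sketch of this point (an illustrative setup, not from the text): simulate the density of $y = u + \varepsilon$ with $u, \varepsilon \sim N(0,1)$ by averaging $\varphi(y - u_r)$ over draws of $u$. The simulator is unbiased for $f$, but its log is biased downward.

```python
import numpy as np

rng = np.random.default_rng(0)

# True likelihood: f(y) = E_u[phi(y - u)] with u ~ N(0,1), i.e. the
# density of y = u + eps with eps ~ N(0,1), so y ~ N(0, 2).
y = 0.5
f_true = np.exp(-y**2 / 4) / np.sqrt(4 * np.pi)

R = 10            # small number of draws, so the log bias is visible
n_rep = 200_000   # Monte Carlo replications to estimate the expectations

u = rng.standard_normal((n_rep, R))
phi = np.exp(-(y - u)**2 / 2) / np.sqrt(2 * np.pi)
f_sim = phi.mean(axis=1)                     # unbiased simulator of f

print(f_sim.mean(), f_true)                  # approximately equal
print(np.log(f_sim).mean(), np.log(f_true))  # E[ln f~] falls short of ln f
```

The first pair of numbers agrees up to Monte Carlo error; the second shows the strict Jensen gap.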

## Consistency

All is not lost: even though the MSL estimate is biased, we can still obtain an estimator whose probability limit is the same as that of the MLE. This requires that the sample average of the simulated log-likelihood converge to the sample average log-likelihood, which can be accomplished by increasing the number of simulations, and thus decreasing the simulation error, at a sufficiently fast rate relative to the sample size. We have the following lemma (see Newey and McFadden, 1994):

**Lemma.** Suppose the following:

- $\theta \in \Theta \subset {\mathbb{R}}^{K}$ and $\Theta $ is compact,
- ${Q}_{0}(\theta )$ and ${Q}_{N}(\theta )$ are continuous in $\theta $,
- ${\theta}_{0}\equiv \mathrm{arg}{\mathrm{max}}_{\theta \in \Theta}{Q}_{0}(\theta )$ is unique,
- ${\hat{\theta}}_{N}\equiv \mathrm{arg}{\mathrm{max}}_{\theta \in \Theta}{Q}_{N}(\theta )$, and
- ${Q}_{N}(\theta )\to {Q}_{0}(\theta )$ in probability uniformly in $\theta $ as $N\to \mathrm{\infty}$.

Then, ${\hat{\theta}}_{N}\to {\theta}_{0}$ in probability.

Now, suppose that $f$ satisfies the conditions of this lemma. In particular,
suppose that the observations $y_i$ are *iid*, that $\theta$ is identified,
and that $f(\theta \mid y)$ is continuous in $\theta$ over some compact set
$\Theta$. Finally, assume that
$\operatorname{E}[\sup_{\theta \in \Theta} |\ln f(\theta \mid y)|]$ is finite.

Now, given simulation draws $\omega_{ir}$, *iid* across $r$, and writing $\omega_i \equiv (\omega_{i1}, \dots, \omega_{iR})$, the MSL estimator defined as

$$\hat{\theta}_{\text{MSL}} \equiv \arg\max_{\theta \in \Theta} \sum_{i=1}^{N} \ln \tilde{f}(\theta \mid y_i, \omega_i)$$

is consistent if $R\to \mathrm{\infty}$ as $N\to \mathrm{\infty}$. For a proof refer to Hajivassiliou and Ruud (1994, p. 2417).
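A minimal MSL sketch under an assumed toy model (not from the text): estimate the mean $\theta$ in $y = \theta + u + \varepsilon$ with latent $u \sim N(0,1)$ and $\varepsilon \sim N(0,1)$, simulating the integral over $u$ with $R$ draws. Note that the draws $\omega_{ir}$ are held fixed across evaluations of $\theta$, as the estimator requires.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
theta0, N, R = 1.0, 2000, 200

# Data from the toy model y_i = theta0 + u_i + eps_i.
y = theta0 + rng.standard_normal(N) + rng.standard_normal(N)

# Fixed simulation draws omega_ir, iid across r, one block per observation.
omega = rng.standard_normal((N, R))

def neg_sim_loglik(theta):
    # f~(theta | y_i, omega_i) = (1/R) sum_r phi(y_i - theta - omega_ir)
    z = y[:, None] - theta - omega
    f_sim = (np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)).mean(axis=1)
    return -np.log(f_sim).sum()

res = minimize_scalar(neg_sim_loglik, bounds=(-5, 5), method="bounded")
print(res.x)   # close to theta0 = 1.0 for large N and R
```

With $N = 2000$ and $R = 200$ the simulation bias is negligible next to the sampling error of roughly $\sqrt{2/N} \approx 0.03$.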

## Asymptotic Normality

Suppose that $\tilde{f}$ is differentiable in $\theta$. Then we can form a mean-value (first-order Taylor) expansion of the gradient $\nabla_\theta \tilde{l}_N(\theta)$, evaluated at $\hat{\theta}_{\text{MSL}}$, around $\theta_0$:

$$\nabla_\theta \tilde{l}_N(\hat{\theta}_{\text{MSL}}) = \nabla_\theta \tilde{l}_N(\theta_0) + \nabla_\theta^2 \tilde{l}_N(\bar{\theta})\,(\hat{\theta}_{\text{MSL}} - \theta_0)$$

for some $\bar{\theta}$ lying on the line segment between $\hat{\theta}_{\text{MSL}}$ and $\theta_0$. The left-hand side equals zero by the first-order condition, so multiplying by $\sqrt{N}$ and rearranging gives

$$\sqrt{N}\,(\hat{\theta}_{\text{MSL}} - \theta_0) = -\left[\frac{1}{N}\nabla_\theta^2 \tilde{l}_N(\bar{\theta})\right]^{-1} \frac{1}{\sqrt{N}}\,\nabla_\theta \tilde{l}_N(\theta_0).$$

Now, the consistency of $\hat{\theta}_{\text{MSL}}$ implies consistency of $\bar{\theta}$, and so

$$\frac{1}{N}\nabla_\theta^2 \tilde{l}_N(\bar{\theta}) \overset{p}{\to} \operatorname{E}\left[\nabla_\theta^2 \ln f(\theta_0 \mid y)\right].$$

As for the gradient term, we have

$$\frac{1}{\sqrt{N}}\,\nabla_\theta \tilde{l}_N(\theta_0) = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \nabla_\theta \ln \tilde{f}(\theta_0 \mid y_i, \omega_i).$$

Ideally, to prove asymptotic normality we would like this to converge to some mean-zero normal distribution. However, the expectations of the individual terms in this summation are nonzero, so we cannot apply a central limit theorem directly. We can rewrite this term as follows:

$$\frac{1}{\sqrt{N}} \sum_{i=1}^{N} \nabla_\theta \ln \tilde{f}(\theta_0 \mid y_i, \omega_i) = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \nabla_\theta \ln f(\theta_0 \mid y_i) + A_N + B_N$$

with

$$A_N \equiv \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \left\{ \nabla_\theta \ln \tilde{f}(\theta_0 \mid y_i, \omega_i) - \operatorname{E}_\omega\!\left[\nabla_\theta \ln \tilde{f}(\theta_0 \mid y_i, \omega)\right] \right\}$$

and

$$B_N \equiv \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \left\{ \operatorname{E}_\omega\!\left[\nabla_\theta \ln \tilde{f}(\theta_0 \mid y_i, \omega)\right] - \nabla_\theta \ln f(\theta_0 \mid y_i) \right\}.$$

The term ${A}_{N}$ represents the pure simulation noise and has expectation zero. The ${B}_{N}$ term represents the simulation bias. Proposition 4 of Hajivassiliou and Ruud (1994, p. 2418) shows that if $R$ grows fast enough relative to $N$, specifically if $R/\sqrt{N}\to \mathrm{\infty}$, then the simulation bias is harmless. Finally, Proposition 5 (p. 2419) shows that ${\hat{\theta}}_{\text{MSL}}$ is in fact asymptotically efficient.
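The intuition behind the $R/\sqrt{N} \to \infty$ rate can be seen numerically (an assumed setup, reusing the $y = u + \varepsilon$ example, not from the text): the per-observation simulation bias in the log-likelihood falls at rate $1/R$, so the aggregate bias term $B_N$ is of order $\sqrt{N}/R$ and vanishes only when $R$ grows faster than $\sqrt{N}$.

```python
import numpy as np

rng = np.random.default_rng(2)
y = 0.5
f_true = np.exp(-y**2 / 4) / np.sqrt(4 * np.pi)   # density of N(0, 2) at y

n_rep = 100_000
biases = []
for R in (5, 10, 20, 40):
    # Unbiased simulator of f based on R draws; its log is biased.
    u = rng.standard_normal((n_rep, R))
    f_sim = (np.exp(-(y - u)**2 / 2) / np.sqrt(2 * np.pi)).mean(axis=1)
    bias = np.log(f_sim).mean() - np.log(f_true)
    biases.append(bias)
    print(R, bias)   # bias roughly halves each time R doubles
```

The printed biases are negative (the Jensen direction) and shrink proportionally to $1/R$, consistent with the leading-order bias $-\operatorname{Var}(\tilde{f})/(2f^2)$ with $\operatorname{Var}(\tilde{f}) \propto 1/R$.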

## References

Gouriéroux, C. and A. Monfort (1991). Simulation Based Inference in Models with Heterogeneity. *Annales d'Économie et de Statistique* 20/21, 69–107.

Hajivassiliou, V. A. and P. A. Ruud (1994). Classical Estimation Methods for LDV Models Using Simulation, in R. F. Engle and D. L. McFadden, eds., *Handbook of Econometrics*, volume 4. Amsterdam: Elsevier.

Lee, L.-F. (1992). On Efficiency of Methods of Simulated Moments and Maximum Simulated Likelihood Estimation of Discrete Response Models. *Econometric Theory* 8, 518–552.

Newey, W. K. and D. McFadden (1994). Large Sample Estimation and Hypothesis Testing, in R. F. Engle and D. L. McFadden, eds., *Handbook of Econometrics*, volume 4. Amsterdam: Elsevier.