Pesendorfer and Schmidt-Dengler (2008)
These notes are based on the following article:
Pesendorfer, Martin and Philipp Schmidt-Dengler (2008). Asymptotic Least Squares Estimators for Dynamic Games. Review of Economic Studies 75, 901–928.
Presentation by Jason Blevins, Duke University Applied Microeconomics Reading Group, June 11, 2008.
Outline
- Considers the class of asymptotic least squares estimators for dynamic games.
- Estimation is based on equilibrium conditions.
- Discuss identification and provide sufficient conditions for exact identification.
- Characterize the efficient asymptotic least squares estimator.
- Several well-known estimators are members of this class.
- Monte Carlo experiments.
Framework
- Dynamic games in discrete time with $t = 1, 2, \dots$.
- $N$ players, $K$ actions, $m$ states per player, common discount factor $\beta \in (0, 1)$.
- States:
  - Each player $i$ observes a state $s^i \in S^i$ and a vector of payoff shocks $\varepsilon^i = (\varepsilon^i(0), \dots, \varepsilon^i(K-1))$ drawn from a distribution $F$ with density $f$ on $\mathbb{R}^K$.
  - Let $s = (s^1, \dots, s^N) \in S$ denote the profile of publicly observed states.
- The payoff shocks are private information, independent across players and time, and independent of the actions of other players.
- Actions are chosen simultaneously. Let $a = (a^1, \dots, a^N) \in A = \{0, 1, \dots, K-1\}^N$ denote the action profile.
- State transitions follow some density $g(s' \mid a, s)$. Let $G$ denote the matrix of these probabilities, where rows are indexed by state-action pairs $(s, a)$ and columns by successor states $s'$.
- Period payoffs are given by $\pi^i(a, s) + \varepsilon^i(a^i)$.
Equilibrium Characterization
The continuation value net of payoff shocks under action $a^i$ with beliefs $\sigma$ is
\[
  v^i(a^i, s; \sigma) = \sum_{a^{-i}} \sigma^{-i}(a^{-i} \mid s) \left[ \pi^i(a^i, a^{-i}, s) + \beta \sum_{s'} g(s' \mid a, s) \, V^i(s'; \sigma) \right].
\]
It is optimal to choose $a^i$ under the beliefs $\sigma$ if
\[
  v^i(a^i, s; \sigma) + \varepsilon^i(a^i) \geq v^i(\tilde{a}^i, s; \sigma) + \varepsilon^i(\tilde{a}^i)
  \quad \text{for all } \tilde{a}^i.
\]
Ex ante, in expectation we have
\[
  p^i(a^i \mid s) = \Pr\left[ v^i(a^i, s; \sigma) + \varepsilon^i(a^i) \geq v^i(\tilde{a}^i, s; \sigma) + \varepsilon^i(\tilde{a}^i) \ \text{for all } \tilde{a}^i \right].
\]
In matrix notation we have a system
\[
  V^i = \Pi^i(p) + \beta \, G(p) \, V^i
  \quad \Longrightarrow \quad
  V^i = \left( I - \beta \, G(p) \right)^{-1} \Pi^i(p),
\]
where $\Pi^i(p)$ collects expected period payoffs and $G(p)$ is the state transition matrix induced by the choice probabilities $p$.
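The matrix system above can be solved directly by linear algebra. The sketch below (with made-up payoffs and transition probabilities for a single player over three states, purely illustrative) computes ex ante values from $V = \Pi(p) + \beta G(p) V$:

```python
import numpy as np

beta = 0.9                          # discount factor (hypothetical value)
Pi = np.array([1.0, 0.5, -0.2])     # expected period payoffs Pi(p), one per state (made up)
G = np.array([[0.7, 0.2, 0.1],      # transition matrix G(p) induced by the
              [0.3, 0.4, 0.3],      # choice probabilities; each row sums to 1
              [0.1, 0.3, 0.6]])

# V = Pi + beta * G V  <=>  (I - beta G) V = Pi
V = np.linalg.solve(np.eye(3) - beta * G, Pi)
```

Solving the linear system once replaces evaluating the infinite discounted sum $\sum_k (\beta G)^k \Pi$.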
Equilibrium Properties
In equilibrium, beliefs are consistent ($\sigma = p$) and we have the fixed point problem
\begin{equation} \label{fixed_point}
  p = \Psi(p).
\end{equation}
Thus, finding an equilibrium is a fixed point problem in $p \in [0, 1]^{N |S| (K-1)}$.
Proposition: In any Markov perfect equilibrium, the probability vector $p$ satisfies \eqref{fixed_point}. Conversely, any $p$ that satisfies \eqref{fixed_point} can be extended to a Markov perfect equilibrium.
Theorem: A Markov perfect equilibrium exists.
We have the same results under symmetric equilibria: existence and necessary and sufficient conditions. Symmetry reduces the number of equations in \eqref{fixed_point} and thus the computational complexity.
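To make the fixed point concrete, here is a minimal sketch (not the paper's design): a one-state, two-player entry game with logistic payoff shocks, where player $i$'s entry payoff $\alpha - \delta \, p^j$ falls in the rival's entry probability. Best responses then take the logit form, and an equilibrium is found by iterating $p \mapsto \Psi(p)$. The parameter values $\alpha$ and $\delta$ are hypothetical.

```python
import numpy as np

alpha, delta = 1.0, 2.0          # hypothetical entry payoff and competition effect

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def Psi(p):
    """Best-response entry probabilities given beliefs p = (p1, p2)."""
    # Player i enters when alpha - delta * p_j + eps(1) >= eps(0);
    # logistic shocks give the logit choice probability.
    return sigmoid(alpha - delta * p[::-1])

p = np.array([0.2, 0.8])         # arbitrary starting beliefs
for _ in range(200):             # iterate toward the fixed point p = Psi(p)
    p = Psi(p)
```

Iteration converges here because the map is a contraction for these parameter values; in general a dynamic game can have multiple fixed points, as the Monte Carlo design below illustrates.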
Identification
The model is identified if there exists a unique set of model primitives that generate any particular set of choice and state transition probabilities.
- Time series data $\{(a_t, s_t)\}_{t=1}^{T}$.
- Suppose the data allow us to characterize $p$ and $g$.
- Fix $\beta$ and the shock distribution $F$.
- There are $N |S| K^N$ remaining unknowns in $\pi = (\pi^1, \dots, \pi^N)$.
Proposition: Suppose $p$ and $g$ are given. Then at most $N |S| (K-1)$ parameters can be identified.
There are only $N |S| (K-1)$ equations in the equilibrium conditions but $N |S| K^N$ parameters. We need at least $N |S| (K^N - K + 1)$ restrictions in order to identify all parameters.
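The counting argument is simple arithmetic; the sketch below runs it for hypothetical sizes ($N = 2$ players, $K = 2$ actions, and a joint state space with four points):

```python
# Order condition arithmetic (all sizes hypothetical).
N, K, S = 2, 2, 4                  # players, actions per player, joint states

equations = N * S * (K - 1)        # one equilibrium condition per player, state,
                                   # and non-zero action
unknowns = N * S * K**N            # payoffs pi^i(a, s): one per player, joint
                                   # action profile, and state
restrictions_needed = unknowns - equations
```

Here 8 equations face 32 unknown payoffs, so at least 24 restrictions are required.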
Identification: A Linear Representation
There is some shock realization that makes player $i$ indifferent between actions $a^i$ and $\tilde{a}^i$:
\[
  v^i(a^i, s) + \varepsilon^i(a^i) = v^i(\tilde{a}^i, s) + \varepsilon^i(\tilde{a}^i).
\]
From before, $V^i = (I - \beta G(p))^{-1} \Pi^i(p)$, so continuation values are linear in the period payoffs. Thus, we have a linear system of $|S|(K-1)$ equations for player $i$:
\[
  X^i(p, g, \beta) \, \pi^i = c^i(p, g, \beta),
\]
where $X^i$ is an $|S|(K-1) \times |S| K^N$ matrix and $c^i$ is an $|S|(K-1)$-dimensional vector, both of which depend on the choice probabilities, transition probabilities, and $\beta$.
Identification: Linear Restrictions
Consider player $i$. Let $R^i$ be a $q^i \times |S| K^N$ matrix of restrictions and let $r^i$ be a $q^i$-dimensional vector such that $R^i \pi^i = r^i$.
We can now form an augmented linear system of $|S|(K-1) + q^i$ equations in $|S| K^N$ unknowns (when $q^i \geq |S|(K^N - K + 1)$, the order condition is satisfied):
\[
  \begin{bmatrix} X^i \\ R^i \end{bmatrix} \pi^i = \begin{bmatrix} c^i \\ r^i \end{bmatrix}.
\]
Proposition: Consider any player $i$ and suppose that $p$ and $g$ are given. If
\[
  \operatorname{rank} \begin{bmatrix} X^i \\ R^i \end{bmatrix} = |S| K^N,
\]
then $\pi^i$ is exactly identified.
Example: Consider the following restrictions: $\pi^i(a, s) = \pi^i(a, \tilde{s})$ whenever $s^i = \tilde{s}^i$, and $\pi^i(a, s) = 0$ whenever $a^i = 0$. The first is an exclusion restriction (payoffs depend on the state only through the player's own state) while the second is an exogeneity restriction (e.g., payoffs for inactive firms are known to be zero). If together they supply enough linearly independent rows of $R^i$, then these restrictions ensure identification (provided that the rank condition holds).
Asymptotic Least Squares Estimators
Let $\theta \in \Theta$ be the parameters of interest.
There are also auxiliary parameters $p$ and $g$, related to $\theta$ through the equations
\begin{equation} \label{estimating_equations}
  h(\theta, p, g) = p - \Psi(p; \theta, g) = 0.
\end{equation}
Asymptotic least squares estimators (Gourieroux and Monfort, 1995, Section 9.1) proceed in two steps:
- Estimate the auxiliary parameters $p$ and $g$.
- Estimate the parameters of interest $\theta$ by weighted least squares, using \eqref{estimating_equations} as estimating equations.
Asymptotic Least Squares Estimators
Assume that consistent and asymptotically normal estimators $\hat{p}_T$ and $\hat{g}_T$ of $p$ and $g$ are available such that, as $T \to \infty$,
\[
  \sqrt{T} \begin{bmatrix} \hat{p}_T - p \\ \hat{g}_T - g \end{bmatrix} \xrightarrow{d} N(0, \Sigma).
\]
The estimation principle involves choosing $\theta$ in order to satisfy, as closely as possible, the constraints
\[
  h(\theta, \hat{p}_T, \hat{g}_T) = 0.
\]
Let $W_T$ be a symmetric positive-definite weight matrix of dimension $N|S|(K-1)$. The asymptotic least squares estimator corresponding to $\{W_T\}$ is defined as
\[
  \hat{\theta}_T = \operatorname*{arg\,min}_{\theta \in \Theta} \; h(\theta, \hat{p}_T, \hat{g}_T)' \, W_T \, h(\theta, \hat{p}_T, \hat{g}_T).
\]
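When the estimating equations happen to be linear in $\theta$, the ALS minimization has a weighted least squares closed form. The sketch below uses a made-up linear $h(\theta, a) = a - M\theta$ with a noisy auxiliary estimate $\hat{a}$ and the identity weight matrix (a valid, though generally inefficient, choice); all names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

M = rng.normal(size=(6, 2))                      # made-up Jacobian of the constraints
theta0 = np.array([1.0, -0.5])                   # "true" parameters (hypothetical)
a_hat = M @ theta0 + 0.01 * rng.normal(size=6)   # noisy first-stage estimate

W = np.eye(6)                                    # symmetric positive-definite weight

# argmin_theta (a_hat - M theta)' W (a_hat - M theta)
theta_hat = np.linalg.solve(M.T @ W @ M, M.T @ W @ a_hat)
```

In the dynamic game $h$ is nonlinear in $\theta$ through $\Psi$, so the minimization is numerical rather than closed form, but the weighted quadratic objective is the same.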
Asymptotic Least Squares Estimators: Assumptions
- $\Theta$ is a compact set.
- The true parameter $\theta_0$ lies in the interior of $\Theta$.
- As $T \to \infty$, $W_T \to W_0$ a.s., where $W_0$ is a non-stochastic positive definite matrix.
- $\theta$ satisfies $h(\theta, p, g) = 0$ implies that $\theta = \theta_0$.
- The functions defining $h$ are twice continuously differentiable in $(\theta, p, g)$.
- The matrix $\frac{\partial h'}{\partial \theta} W_0 \frac{\partial h}{\partial \theta'}$, evaluated at $(\theta_0, p, g)$, is nonsingular.
Asymptotic Least Squares Estimators: Properties
Proposition: Given the assumptions above, the asymptotic least squares estimator exists, $\hat{\theta}_T \to \theta_0$ a.s., and as $T \to \infty$,
\[
  \sqrt{T} (\hat{\theta}_T - \theta_0) \xrightarrow{d} N(0, \Omega(W_0)),
\]
where
\[
  \Omega(W_0)
  = \left( \frac{\partial h'}{\partial \theta} W_0 \frac{\partial h}{\partial \theta'} \right)^{-1}
    \frac{\partial h'}{\partial \theta} \, W_0 \, \frac{\partial h}{\partial (p', g')} \, \Sigma \, \frac{\partial h'}{\partial (p', g')} \, W_0 \, \frac{\partial h}{\partial \theta'}
    \left( \frac{\partial h'}{\partial \theta} W_0 \frac{\partial h}{\partial \theta'} \right)^{-1},
\]
where $\Sigma = \begin{bmatrix} \Sigma_p & 0 \\ 0 & \Sigma_g \end{bmatrix}$, $0$ is the zero matrix, and the various derivative matrices are evaluated at $\theta_0$, $p$, and $g$.
Efficient Asymptotic Least Squares
Proposition: Under the maintained assumptions, the best asymptotic least squares estimators exist. They correspond to sequences of weight matrices converging to
\[
  W_0^* = \left( \frac{\partial h}{\partial (p', g')} \, \Sigma \, \frac{\partial h'}{\partial (p', g')} \right)^{-1}.
\]
Their asymptotic covariance matrices are
\[
  \Omega(W_0^*) = \left( \frac{\partial h'}{\partial \theta} \left[ \frac{\partial h}{\partial (p', g')} \, \Sigma \, \frac{\partial h'}{\partial (p', g')} \right]^{-1} \frac{\partial h}{\partial \theta'} \right)^{-1}.
\]
Here, $0$ denotes a matrix of zeros (the off-diagonal blocks of $\Sigma$).
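The efficiency result can be checked numerically. In the sketch below, $H$ stands in for $\partial h / \partial \theta'$ and $\Sigma_h$ for the asymptotic covariance contributed by the first-stage estimates (both randomly generated, purely illustrative); the sandwich $\Omega(W)$ evaluated at the efficient weight is never larger, in the positive semi-definite order, than at any other weight:

```python
import numpy as np

rng = np.random.default_rng(2)

H = rng.normal(size=(6, 2))            # stand-in for dh/dtheta' (6 equations, 2 params)
C = rng.normal(size=(6, 6))
Sigma_h = C @ C.T + np.eye(6)          # positive-definite stand-in covariance

def omega(W):
    """Sandwich asymptotic variance of the ALS estimator for weight matrix W."""
    B = np.linalg.inv(H.T @ W @ H)
    return B @ H.T @ W @ Sigma_h @ W @ H @ B

V_eff = omega(np.linalg.inv(Sigma_h))  # efficient weight: inverse of Sigma_h
V_id = omega(np.eye(6))                # identity weight for comparison
```

With the efficient weight the sandwich collapses to $(H' \Sigma_h^{-1} H)^{-1}$, the Gauss-Markov-style lower bound within this class.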
Asymptotic Least Squares: Moment Estimator
The moment estimator proposed by Hotz and Miller (1993) is an asymptotic least squares estimator with a particular weight matrix.
Let $T^i(s)$ denote the set of observations for individual $i$ in state $s$ and let $d_{it}$ be a $(K-1)$-dimensional vector of indicators for each choice (with choice zero omitted).
The moment condition is
\[
  E\left[ z_{it} \otimes \left( d_{it} - p^i(s_t) \right) \right] = 0,
\]
where $z_{it}$ is a vector of instruments.
Suppose the instruments are indicators for each state. Then the corresponding sample analog becomes
\[
  \hat{p}^i(s) - \Psi^i(s; \theta, \hat{p}, \hat{g}) = 0
  \quad \text{for each } s \in S,
  \qquad \text{where } \hat{p}^i(s) = \frac{1}{|T^i(s)|} \sum_{t \in T^i(s)} d_{it}.
\]
Thus, the moment estimator in this case is an asymptotic least squares estimator with estimating equation $\hat{p} - \Psi(\hat{p}; \theta, \hat{g}) = 0$.
Asymptotic Least Squares: Pseudo Maximum Likelihood
The pseudo maximum likelihood estimator of Aguirregabiria and Mira (2002, 2007) is also an asymptotic least squares estimator.
The partial pseudo log-likelihood, conditional on first-stage estimates $(\hat{p}, \hat{g})$, is
\[
  \mathcal{L}_T(\theta) = \frac{1}{T} \sum_{t=1}^{T} \sum_{i=1}^{N} \ln \Psi^i(a^i_t \mid s_t; \theta, \hat{p}, \hat{g}).
\]
The first order condition is
\[
  \frac{\partial \Psi'}{\partial \theta} \, \hat{\Sigma}_p^{-1} \left( \hat{p} - \Psi(\hat{p}; \theta, \hat{g}) \right) = 0,
\]
where $\hat{\Sigma}_p^{-1}$ is the inverse covariance matrix of the choice probabilities.
This is equivalent to the first order condition of the asymptotic least squares estimator with weight matrix $W_T = \hat{\Sigma}_p^{-1}$.
Monte Carlo Study
- Compare four estimators: the efficient ALS estimator (LS-E), ALS with the identity weight matrix (LS-I), pseudo maximum likelihood (PML), and iterated PML (k-PML).
- A simple two-player, two-action, two-state game with five equilibria.
- Three equilibria are used for experiments with various sample sizes.
- LS-E estimator performs best overall (in eight of 12 experiments).
- LS-E performs poorly at the smallest sample size.
- PML ranks second (by MSE) in seven of 12 specifications.
- PML performs better than LS-E at the smallest sample size and worse at larger sample sizes. This may be because the covariance matrix of the choice probabilities is estimated more precisely than the efficient weight matrix when $T$ is small.
- PML may be less computationally burdensome for large state spaces.