Pesendorfer and Schmidt-Dengler (2008)

Asymptotic Least Squares Estimators for Dynamic Games

These notes are based on the following article:

Pesendorfer, Martin and Philipp Schmidt-Dengler (2008). Asymptotic Least Squares Estimators for Dynamic Games. Review of Economic Studies 75, 901–928.

Presentation by Jason Blevins, Duke University Applied Microeconomics Reading Group, June 11, 2008.

Outline

Framework

There are $N$ players, indexed $i = 1, \dots, N$. Each period, player $i$ chooses an action $a_i$ from a finite set $A_i$ containing $K + 1$ elements (action $0$ plus $K$ others), and the state $s$ lies in a finite set $S$ with $m_s$ elements; $m_a$ denotes the number of action profiles. Player $i$ receives the per-period payoff $\pi_i(a_i, a_{-i}, s)$ plus an additively separable, privately observed shock $\varepsilon_{i,a_i}$ drawn from a distribution $F$. States evolve according to the transition law $g(a_i, a_{-i}, s, s')$ and future payoffs are discounted by $\beta$.

Equilibrium Characterization

The continuation value net of payoff shocks under action $a_i$ with beliefs $\sigma_{-i}$ is
$$u_i(a_i; \sigma_{-i}, \theta) = \sum_{a_{-i}} \sigma_{-i}(a_{-i} \mid s) \left[ \pi_i(a_i, a_{-i}, s) + \beta \sum_{s'} g(a_i, a_{-i}, s, s') V_i(s'; \sigma_{-i}) \right].$$
It is optimal to choose $a_i$ under the beliefs $\sigma_{-i}$ if
$$u_i(a_i; \sigma_{-i}, \theta) + \varepsilon_{i,a_i} \geq u_i(a_i'; \sigma_{-i}, \theta) + \varepsilon_{i,a_i'} \quad \forall a_i' \in A_i.$$

Ex ante, in expectation we have
$$p(a_i \mid s, \sigma_{-i}) = \Psi_i(a_i, s, \sigma_{-i}; \theta) = \int 1\left\{ u_i(a_i; \sigma_{-i}, \theta) - u_i(k; \sigma_{-i}, \theta) \geq \varepsilon_{i,k} - \varepsilon_{i,a_i} \ \forall k \neq a_i \right\} \, dF.$$
In matrix notation we have an $(N K m_s) \times 1$ system $p = \Psi(\sigma; \theta)$.
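
For intuition, if $F$ were i.i.d. type-I extreme value across actions (one convenient special case; the model allows a general $F$), the integral defining $\Psi_i$ has a logit closed form. A minimal sketch for a single player $i$ and state $s$:

```python
import numpy as np

def psi_logit(u):
    """Best-response choice probabilities when the payoff shocks are
    i.i.d. type-I extreme value, so the integral over F is logit.

    u : (K+1,) array of choice-specific values u_i(a_i; sigma_{-i}, theta),
        including action 0, for a fixed player i and state s.
    Returns the (K+1,) vector of probabilities p(a_i | s, sigma_{-i}).
    """
    v = u - u.max()            # normalize for numerical stability
    expv = np.exp(v)
    return expv / expv.sum()

# Example: three actions with values 1.0, 0.5, and 0.0
print(psi_logit(np.array([1.0, 0.5, 0.0])))
```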

Equilibrium Properties

In equilibrium, beliefs are consistent and we have the fixed point problem
\begin{equation}\label{fixed_point}
p = \Psi(p; \theta).
\end{equation}
Thus, finding an equilibrium is a fixed point problem on $[0,1]^{N K m_s}$.

Proposition: In any Markov perfect equilibrium, the probability vector p satisfies \eqref{fixed_point}. Conversely, any p that satisfies \eqref{fixed_point} can be extended to a Markov perfect equilibrium.

Theorem: A Markov perfect equilibrium exists.

We have the same results under symmetric equilibria: existence and necessary and sufficient conditions. Symmetry reduces the number of equations in \eqref{fixed_point} and thus the computational complexity.
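
To make the fixed point concrete, here is a minimal computational sketch, assuming a user-supplied best-response map `Psi` returning the stacked $N K m_s$ probabilities (the function name and interface are illustrative):

```python
import numpy as np
from scipy.optimize import fsolve

def solve_equilibrium(Psi, theta, n, p0=None):
    """Solve the fixed point p = Psi(p, theta) on [0, 1]^n.

    Psi   : callable mapping (p, theta) to an n-vector of best-response
            choice probabilities (n = N * K * m_s)
    theta : structural parameter vector
    p0    : optional starting point; defaults to the interior point 0.5
    """
    if p0 is None:
        p0 = np.full(n, 0.5)
    return fsolve(lambda p: p - Psi(p, theta), p0)
```

Because dynamic games typically admit multiple equilibria, the root returned depends on the starting point `p0`; in practice one would restart the solver from several initial vectors.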

Identification

The model is identified if there exists a unique set of model primitives $(\Pi_1, \dots, \Pi_N, F, \beta, g)$ that generates any particular set of choice and state transition probabilities.

Proposition: Suppose $F$ and $\beta$ are given. Then at most $K m_s N$ parameters can be identified.

There are only $K m_s N$ equations in the equilibrium conditions but $m_a m_s N$ parameters. We need at least $(m_a m_s - K m_s) N$ restrictions in order to identify all parameters.
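
For instance (an illustrative count, taking $m_a = (K+1)^N$ action profiles): with $N = 2$ players, binary actions ($K = 1$, so $m_a = 4$), and $m_s = 4$ states, the equilibrium conditions supply $K m_s N = 8$ equations while the payoff matrices contain $m_a m_s N = 32$ entries, so at least $(m_a m_s - K m_s) N = 24$ restrictions are required.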

Identification: A Linear Representation

There is some $\bar{\varepsilon}_i^{a_i}(s)$ that makes player $i$ indifferent between actions $a_i$ and $0$:
$$\sum_{a_{-i} \in A_{-i}} p(a_{-i} \mid s) \left[ \pi_i(a_i, a_{-i}, s) + \beta \sum_{s' \in S} g(a_i, a_{-i}, s, s') V_i(s'; p) \right] + \bar{\varepsilon}_i^{a_i}(s) = \sum_{a_{-i} \in A_{-i}} p(a_{-i} \mid s) \left[ \pi_i(0, a_{-i}, s) + \beta \sum_{s' \in S} g(0, a_{-i}, s, s') V_i(s'; p) \right].$$

From before, $V_i(\sigma) = [I - \beta \sigma G]^{-1} [\sigma \Pi_i + D_i(\sigma)]$. Thus, we have a linear system of equations for player $i$:
$$X_i(p, g, \beta) \, \Pi_i + Y_i(p, g, \beta) = 0$$
where $X_i$ is a $(K m_s) \times (m_a m_s)$ matrix and $Y_i$ is a $(K m_s) \times 1$ vector, both of which depend on the choice probabilities, transition probabilities, and $\beta$.
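
Since $V_i$ is linear in $\Pi_i$, substituting it into the indifference conditions is what makes the system linear. A minimal numerical sketch of the value-function step, where `sigma_G`, `sigma_Pi`, and `D` are illustrative stand-ins for $\sigma G$, $\sigma \Pi_i$, and $D_i(\sigma)$:

```python
import numpy as np

def value_function(beta, sigma_G, sigma_Pi, D):
    """Compute V_i = [I - beta * sigma G]^{-1} (sigma Pi_i + D_i).

    beta     : discount factor in (0, 1)
    sigma_G  : (m_s, m_s) state transition matrix induced by the beliefs
    sigma_Pi : (m_s,) expected per-period payoffs under the beliefs
    D        : (m_s,) expected payoff shocks D_i
    """
    m_s = sigma_G.shape[0]
    A = np.eye(m_s) - beta * sigma_G
    return np.linalg.solve(A, sigma_Pi + D)   # solve rather than invert
```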

Identification: Linear Restrictions

Consider player $i$. Let $R_i$ be a $(m_a m_s - K m_s) \times (m_a m_s)$ matrix of restrictions and let $r_i$ be a $(m_a m_s - K m_s) \times 1$ vector such that $R_i \Pi_i = r_i$.

We can now form an augmented linear system of $m_a m_s$ equations in $m_a m_s$ unknowns (hence the order condition is satisfied):
$$\begin{bmatrix} X_i \\ R_i \end{bmatrix} \Pi_i + \begin{bmatrix} Y_i \\ -r_i \end{bmatrix} = \bar{X}_i \Pi_i + \bar{Y}_i = 0.$$

Proposition: Consider any player $i$ and suppose that $F$ and $\beta$ are given. If $\operatorname{rank}(\bar{X}_i) = m_a m_s$, then $\Pi_i$ is exactly identified.

Example: Consider the following restrictions:
$$\pi_i(a_i, a_{-i}, s_i, s_{-i}) = \pi_i(a_i, a_{-i}, s_i, s_{-i}') \quad \forall a \in A,\ (s_i, s_{-i}) \in S,\ (s_i, s_{-i}') \in S,$$
$$\pi_i(0, a_{-i}, s_i) = r_i(a_{-i}, s_i) \quad \forall a_{-i} \in A_{-i},\ s_i \in S_i.$$
The first is an exclusion restriction while the second is an exogeneity restriction (e.g., payoffs for inactive firms are known to be zero). If $L \geq K + 1$, where $L$ is the number of values each player's own state $s_i$ can take, then these restrictions ensure identification (provided that the rank condition holds).
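
Computationally, checking the rank condition and recovering $\Pi_i$ amounts to stacking and solving a square linear system. A minimal sketch, assuming $X_i$, $Y_i$, $R_i$, and $r_i$ have already been constructed:

```python
import numpy as np

def solve_payoffs(X, Y, R, r):
    """Solve the augmented system [X; R] Pi + [Y; -r] = 0 for Pi_i.

    X : (K*m_s, m_a*m_s) matrix from the equilibrium conditions
    Y : (K*m_s,) vector from the equilibrium conditions
    R : (m_a*m_s - K*m_s, m_a*m_s) restriction matrix with R Pi = r
    r : (m_a*m_s - K*m_s,) restriction values
    """
    X_bar = np.vstack([X, R])
    Y_bar = np.concatenate([Y, -r])
    if np.linalg.matrix_rank(X_bar) < X_bar.shape[1]:
        raise ValueError("rank condition fails: Pi is not identified")
    return np.linalg.solve(X_bar, -Y_bar)
```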

Asymptotic Least Squares Estimators

Let $\theta = (\theta_\pi, \theta_F, \beta, \theta_g) \in \Theta \subset \mathbb{R}^q$ be the parameters of interest.

There are also $H = (N K m_s) + (m_a m_s^2)$ auxiliary parameters $p(\theta)$ and $g(\theta)$, related to $\theta$ through the $N K m_s$ equations
\begin{equation}\label{estimating_equations}
h(p, g, \theta) = p - \Psi(p, g, \theta) = 0.
\end{equation}

Asymptotic least squares estimators (Gourieroux and Monfort, 1995, Section 9.1) proceed in two steps:

  1. Estimate the auxiliary parameters p and g.
  2. Estimate the parameters of interest by weighted least squares, using \eqref{estimating_equations} as the estimating equations.

Asymptotic Least Squares Estimators

Assume that consistent and asymptotically normal estimators of $p$ and $g$ are available such that as $T \to \infty$,
$$(\hat{p}_T, \hat{g}_T) \to (p(\theta_0), g(\theta_0)) \quad \text{a.s.},$$
$$\sqrt{T} \left[ (\hat{p}_T, \hat{g}_T) - (p(\theta_0), g(\theta_0)) \right] \xrightarrow{d} \operatorname{Normal}(0, \Sigma(\theta_0)).$$

The estimation principle involves choosing $\theta$ so as to satisfy the constraints $h(\hat{p}_T, \hat{g}_T, \theta) = \hat{p}_T - \Psi(\hat{p}_T, \hat{g}_T, \theta) = 0$ as nearly as possible.

Let $W_T$ be a symmetric positive-definite weight matrix of dimension $(N K m_s) \times (N K m_s)$. The asymptotic least squares estimator corresponding to $W_T$ is defined as
$$\tilde{\theta}_T(W_T) = \operatorname*{argmin}_\theta \left[ \hat{p}_T - \Psi(\hat{p}_T, \hat{g}_T, \theta) \right]' W_T \left[ \hat{p}_T - \Psi(\hat{p}_T, \hat{g}_T, \theta) \right].$$
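
A minimal sketch of this minimization, assuming first-stage estimates `p_hat` and `g_hat` and a best-response map `Psi(p, g, theta)` are available (names illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def als_estimator(Psi, p_hat, g_hat, W, theta0):
    """Asymptotic least squares: minimize the weighted quadratic form in
    h(theta) = p_hat - Psi(p_hat, g_hat, theta).

    Psi    : callable returning the stacked (N*K*m_s,) probabilities
    p_hat  : first-stage choice probability estimates
    g_hat  : first-stage transition probability estimates
    W      : (N*K*m_s, N*K*m_s) symmetric positive-definite weight matrix
    theta0 : starting values for the structural parameters
    """
    def objective(theta):
        h = p_hat - Psi(p_hat, g_hat, theta)
        return h @ W @ h

    return minimize(objective, theta0, method="BFGS")
```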

Asymptotic Least Squares Estimators: Assumptions

  1. Θ is a compact set.
  2. θ 0 lies in the interior of Θ.
  3. As $T \to \infty$, $W_T \to W_0$ a.s., where $W_0$ is a non-stochastic positive definite matrix.
  4. $[p(\theta_0) - \Psi(p(\theta_0), g(\theta_0), \theta)]' W_0 [p(\theta_0) - \Psi(p(\theta_0), g(\theta_0), \theta)] = 0$ implies that $\theta = \theta_0$.
  5. The functions $\pi$, $g$, and $F$ are twice continuously differentiable in $\theta$.
  6. The matrix $[\nabla_\theta \Psi(p(\theta_0), g(\theta_0), \theta_0)]' W_0 [\nabla_\theta \Psi(p(\theta_0), g(\theta_0), \theta_0)]$ is nonsingular.

Asymptotic Least Squares Estimators: Properties

Proposition: Given the assumptions above, the asymptotic least squares estimator $\tilde{\theta}_T(W_T)$ exists, $\tilde{\theta}_T(W_T) \to \theta_0$ a.s., and as $T \to \infty$,
$$\sqrt{T} \left( \tilde{\theta}_T(W_T) - \theta_0 \right) \xrightarrow{d} \operatorname{Normal}(0, \Omega(\theta_0))$$
where
$$\Omega(\theta_0) = \left( \nabla_\theta \Psi' W_0 \nabla_\theta \Psi \right)^{-1} \nabla_\theta \Psi' W_0 \left[ (I \;\; 0) - \nabla_{(p,g)} \Psi \right] \Sigma \left[ (I \;\; 0) - \nabla_{(p,g)} \Psi \right]' W_0 \nabla_\theta \Psi \left( \nabla_\theta \Psi' W_0 \nabla_\theta \Psi \right)^{-1},$$
where $0$ is the $(N K m_s) \times (m_a m_s^2)$ zero matrix and the various matrices are evaluated at $\theta_0$, $p(\theta_0)$, and $g(\theta_0)$.
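
Given Jacobians of $\Psi$ (computed analytically or by finite differences), the sandwich formula can be evaluated directly. A sketch with illustrative names, all inputs evaluated at $\theta_0$, $p(\theta_0)$, $g(\theta_0)$:

```python
import numpy as np

def als_covariance(dPsi_dtheta, dPsi_dpg, W, Sigma, n_p):
    """Sandwich covariance Omega for the ALS estimator.

    dPsi_dtheta : (n_p, q) Jacobian of Psi with respect to theta
    dPsi_dpg    : (n_p, H) Jacobian of Psi with respect to (p, g)
    W           : (n_p, n_p) limiting weight matrix W_0
    Sigma       : (H, H) asymptotic covariance of the first stage
    n_p         : N*K*m_s, the number of estimating equations
    """
    H = dPsi_dpg.shape[1]
    # (I 0): selects the p-block of (p, g), assuming p is stacked first
    I0 = np.hstack([np.eye(n_p), np.zeros((n_p, H - n_p))])
    M = I0 - dPsi_dpg                 # gradient of h with respect to (p, g)
    bread = np.linalg.inv(dPsi_dtheta.T @ W @ dPsi_dtheta)
    meat = dPsi_dtheta.T @ W @ M @ Sigma @ M.T @ W @ dPsi_dtheta
    return bread @ meat @ bread
```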

Efficient Asymptotic Least Squares

Proposition: Under the maintained assumptions, the best asymptotic least squares estimators exist. They correspond to sequences of matrices $W_T$ converging to
$$W_0^* = \left( \left[ (I \;\; 0) - \nabla_{(p,g)} \Psi \right] \Sigma \left[ (I \;\; 0) - \nabla_{(p,g)} \Psi \right]' \right)^{-1}.$$
Their asymptotic covariance matrix is
$$\left( \nabla_\theta \Psi' \left( \left[ (I \;\; 0) - \nabla_{(p,g)} \Psi \right] \Sigma \left[ (I \;\; 0) - \nabla_{(p,g)} \Psi \right]' \right)^{-1} \nabla_\theta \Psi \right)^{-1}.$$

Here, $0$ denotes an $(N K m_s) \times (m_a m_s^2)$ matrix of zeros.
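
In the same notation as the covariance sketch above, a short sketch of the efficient weight matrix and the resulting covariance (again with illustrative names):

```python
import numpy as np

def efficient_als(dPsi_dtheta, dPsi_dpg, Sigma, n_p):
    """Efficient weight matrix W* and the resulting asymptotic covariance."""
    H = dPsi_dpg.shape[1]
    I0 = np.hstack([np.eye(n_p), np.zeros((n_p, H - n_p))])
    M = I0 - dPsi_dpg
    W_star = np.linalg.inv(M @ Sigma @ M.T)
    Omega = np.linalg.inv(dPsi_dtheta.T @ W_star @ dPsi_dtheta)
    return W_star, Omega
```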

Asymptotic Least Squares: Moment Estimator

The moment estimator proposed by Hotz and Miller (1993) is an asymptotic least squares estimator with a particular weight matrix.

Let $T_{is}$ denote the set of observations for individual $i$ in state $s$ and let $\alpha_{is} = (\alpha_1, \dots, \alpha_K)$ be a vector of indicators for each choice (with action zero omitted).

The moment condition is
$$E\left[ Z \left( \alpha_{is} - \Psi_{is}(\hat{p}_T, \hat{g}_T, \theta) \right) \right] = 0$$
where $Z$ is a $J \times 1$ vector of instruments.

Suppose $Z_t = Z_{is}$ for all $t \in T_{is}$. Then the corresponding sample analog becomes
$$\frac{1}{NT} \sum_{s \in S} \sum_{1 \leq i \leq N} \sum_{t \in T_{is}} Z_t \left( \alpha_t - \Psi_{is}(\hat{p}_T, \hat{g}_T, \theta) \right) = \frac{1}{NT} \sum_{s \in S} \sum_{1 \leq i \leq N} n_{is} \left[ Z_{is} \left( \hat{p}_{is} - \Psi_{is}(\hat{p}_T, \hat{g}_T, \theta) \right) \right],$$
where $n_{is} = |T_{is}|$ and $\hat{p}_{is}$ is the empirical choice frequency in cell $(i, s)$.

Thus, the moment estimator in this case is an asymptotic least squares estimator with estimating equation $\hat{p} - \Psi(\hat{p}_T, \hat{g}_T, \theta) = 0$.
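
The equality of the two sample expressions is just within-cell aggregation: $Z_t$ is constant on $T_{is}$, so summing $Z_t(\alpha_t - \Psi_{is})$ over $t \in T_{is}$ gives $n_{is} Z_{is}(\hat{p}_{is} - \Psi_{is})$. A quick numerical check of this identity with made-up data ($K = 1$, $J = 2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n_is = 50                                   # observations in cell (i, s)
alpha = rng.integers(0, 2, size=n_is)       # K = 1: choice indicator per obs.
Z_is = np.array([1.0, 0.3])                 # J = 2 cell-level instruments
Psi_is = 0.4                                # model choice probability for the cell

# Left side: sum_t Z_t (alpha_t - Psi_is) with Z_t = Z_is for t in T_is
lhs = sum(Z_is * (a - Psi_is) for a in alpha)

# Right side: n_is * Z_is * (p_hat_is - Psi_is), p_hat_is the cell frequency
p_hat_is = alpha.mean()
rhs = n_is * Z_is * (p_hat_is - Psi_is)

print(np.allclose(lhs, rhs))                # True
```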

Asymptotic Least Squares: Pseudo Maximum Likelihood

The pseudo maximum likelihood estimator of Aguirregabiria and Mira (2002, 2007) is also an asymptotic least squares estimator.

The partial pseudo log-likelihood, conditional on first-stage estimates $\hat{g}_T$, is
$$\mathcal{L} = \sum_{s \in S} \sum_{i=1}^N \sum_{k \in A_i} n_{kis} \ln \Psi_{kis}(\hat{p}_T, \hat{g}_T, \theta),$$
where $n_{kis}$ is the number of observations in which player $i$ chose action $k$ in state $s$.

The first order condition is
$$0 = \nabla_\theta \Psi' \, \Sigma_p^{-1}(\Psi) \left[ \hat{p} - \Psi(\hat{p}_T, \hat{g}_T, \theta) \right]$$
where $\Sigma_p^{-1}(\Psi)$ is the inverse covariance matrix of the choice probabilities.

This is equivalent to the first order condition of the asymptotic least squares estimator with weight matrix $W_T^{\mathrm{ML}} = \Sigma_p^{-1}$.
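
A minimal sketch of the pseudo maximum likelihood step, assuming an array `counts` holding the $n_{kis}$ and a map `Psi` returning probabilities of matching shape (all names illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def pml_estimator(Psi, p_hat, g_hat, counts, theta0):
    """Maximize the partial pseudo log-likelihood
    sum over (k, i, s) of n_{kis} * log Psi_{kis}(p_hat, g_hat, theta).

    Psi    : callable returning probabilities with the same shape as counts
    counts : array of observed choice counts n_{kis}
    theta0 : starting values for the structural parameters
    """
    def neg_loglik(theta):
        probs = Psi(p_hat, g_hat, theta)
        return -np.sum(counts * np.log(probs))

    return minimize(neg_loglik, theta0, method="BFGS")
```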

Monte Carlo Study