Leveraging Uniformization and Sparsity for Computation of Continuous Time Dynamic Discrete Choice Games

Jason R. Blevins
The Ohio State University, Department of Economics
Working Paper.


Abstract. Continuous time formulations of dynamic discrete choice games are known for their computational advantages, for example in representing dynamic strategic interactions in oligopolistic markets. This paper investigates further computational challenges and opportunities beyond these baseline advantages. We first establish a new representation of the value function based on uniformization—a technique from the analysis of continuous time Markov chains—and draw an analogy to discrete time models. This representation allows us to establish an upper bound on the rate of convergence of value function iteration, as well as of relative value function iteration following Bray (2019). We also consider Newton-Kantorovich iteration following Rust (1987). We show that uniformization also yields a stable method for computing the matrix exponential, which appears in the model’s log likelihood function when only discrete time “snapshot” data are available. We develop a new algorithm for simultaneously computing the matrix exponential and its first derivatives with respect to the structural parameters of the model; the algorithm also exploits the inherent sparsity of the model’s intensity matrix. Finally, we consider sparse matrix methods and precomputed addresses to accelerate computation of the value function, the matrix exponential, and its derivatives. Combined, these strategies yield substantial reductions in computation time, facilitating the estimation of more sophisticated and realistic structural models in continuous time dynamic discrete choice games.
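To fix ideas, the uniformization technique mentioned in the abstract can be sketched as follows: given an intensity (generator) matrix Q with rows summing to zero, one picks a rate Λ at least as large as the largest exit rate, forms the stochastic matrix P = I + Q/Λ, and recovers exp(Qt) as a Poisson-weighted sum of powers of P. The sketch below is illustrative only (the function name, tolerance, and truncation rule are our assumptions, not the paper’s algorithm); it also omits refinements such as sparsity exploitation and derivative computation discussed in the paper.

```python
import numpy as np

def expm_uniformization(Q, t, tol=1e-12):
    """Approximate exp(Q * t) for a CTMC generator Q via uniformization.

    Q : (n, n) intensity matrix (nonnegative off-diagonals, rows sum to zero).
    t : elapsed time.
    The Poisson-weighted series is truncated once the accumulated
    probability mass exceeds 1 - tol.  (Illustrative sketch only; for
    large Lam * t the leading weight exp(-Lam * t) can underflow, which
    production implementations must handle.)
    """
    n = Q.shape[0]
    Lam = max(-Q.diagonal().min(), 1e-300)  # uniformization rate >= max exit rate
    P = np.eye(n) + Q / Lam                 # stochastic matrix of the uniformized chain
    weight = np.exp(-Lam * t)               # Poisson(Lam * t) probability of k = 0 jumps
    term = np.eye(n)                        # P**k, starting at k = 0
    result = weight * term
    mass = weight
    k = 0
    while mass < 1.0 - tol:
        k += 1
        term = term @ P
        weight *= Lam * t / k               # advance the Poisson weight recursively
        result += weight * term
        mass += weight
    return result
```

Because every term in the series is a product of nonnegative matrices with nonnegative weights, no cancellation occurs, which is the source of the numerical stability alluded to in the abstract.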

Keywords: Continuous time, Markov decision processes, dynamic discrete choice, dynamic stochastic games, uniformization, matrix exponential, sparse matrices, computational methods, numerical methods.

JEL Classification: C63, C73, L13.