# Heckman and Honoré (1989)

These notes are based on the following article:

Heckman, James J. and Bo E. Honoré (1989). The identifiability of the competing risks model. Biometrika, 76: 325–330.

## The Classical Competing Risks Model

Suppose there are $J$ competing causes of death $\{1,2,\dots ,J\}$.

Associated with each cause of death is a stochastic failure time ${T}_{j}$.

We observe only the distribution of the *identified minimum*:

- The time of death $T = \min_j T_j$.
- The cause of death $I = \arg\min_j T_j$.

Goal: Identify the joint distribution of the latent failure times given that we only observe the distribution of the identified minimum.

Note that we aren’t considering regressors yet.

## Cox and Tsiatis Nonidentification Theorem

For any joint distribution of latent failure times, there exists another such distribution with *independent failure times* that yields the same distribution of the identified minimum (Cox, 1959, 1962; Tsiatis, 1975). That is, given random variables $(T_1, T_2, \dots, T_J)$, there exist $(S_1, S_2, \dots, S_J)$ with $S_i \perp\!\!\!\perp S_j$ for all $i \ne j$ such that $(T, I_T)$ and $(S, I_S)$ are observationally equivalent.

In light of this result, empirical work had to proceed by placing some structure on the form of dependence across risks, for example by assuming independence.

## Importance of Dependence

We are concerned with *conditional independence*: independence of the risks $T_1, \dots, T_J$ conditional on $X$. Even conditional independence may not hold if, for example, we are studying an individual whose behavior may affect all of the risks.

Yashin, Manton, and Stallard (1986): How do smoking, blood pressure, and body weight (regressors) affect the time of death from cancer, heart disease, etc. (risks)?

## Overview

Establish an identification theorem for a general class of competing risks models with regressors.

This class includes models whose marginal distributions follow:

- Proportional hazards.
- Mixed proportional hazards.
- Accelerated hazards.

Results are presented for two competing risks but generalize to an arbitrary finite number of risks.

## Proportional Hazards Model

We first model the time of death $T$ from a *single risk* conditional on covariates $X$. Conditional on $X$, $T$ has cdf $F(t|x)$ and pdf $f(t|x)$.

Hazard function: $\lambda (t|x)=\frac{f(t|x)}{1-F(t|x)}$.

Integrated hazard: $\Lambda(t|x) = \int_0^t \lambda(s|x) \, ds$.

If $\lambda(t|x) = z(t)\varphi(x)$, then $\Lambda(t|x) = Z(t)\varphi(x)$ with $Z(t) = \int_0^t z(s) \, ds$.

Equivalently, we can work with the **survivor function**: $$S(t|x) = \Pr(T > t \mid x) = \exp[-Z(t)\varphi(x)].$$ It is common in practice to use $\varphi(x) = e^{x\beta}$.

Suppose $F(t|x) = 1 - e^{-Z(t)\varphi(x)}$, where $Z(t)$ is the baseline integrated hazard and $\varphi(x)$ is a scaling term.

If $Z$ is differentiable, then $Z'(t)$ is the baseline hazard.
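As a concrete illustration (my own sketch, not from the paper), a Weibull baseline $Z(t) = t^\alpha$ together with $\varphi(x) = e^{x\beta}$ gives the survivor function in closed form:

```python
import math

def survivor_ph(t, x, alpha=1.5, beta=0.8):
    """Proportional hazards survivor S(t|x) = exp[-Z(t) * phi(x)],
    with an assumed Weibull baseline Z(t) = t**alpha and phi(x) = exp(x*beta)."""
    Z = t ** alpha             # baseline integrated hazard
    phi = math.exp(x * beta)   # covariate scaling term
    return math.exp(-Z * phi)
```

For any $x$, $S(0|x) = 1$ and $S(t|x)$ is strictly decreasing in $t$, as a survivor function must be.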

## Proportional Hazards and Competing Risks

Assuming for the moment that failure times are independent, we can easily generalize this to model competing risks.

The distribution of each failure time has a proportional hazard specification.

$Z(t)$ and $\varphi $ may differ across risks.

Since the latent failure times are independent, the joint survivor function is the product of the marginal survivor functions: $$S(t_1, t_2 | x) = \exp[-Z_1(t_1)\varphi_1(x)] \exp[-Z_2(t_2)\varphi_2(x)].$$

## Introducing Dependence

- We could draw two *independent* failure times $T_1$ and $T_2$ by drawing (independently) $U_j \sim U(0,1)$ and solving $U_j = S_j(T_j|x)$ for $T_j$, where

$$S_j(t|x) = \exp\{-Z_j(t)\varphi_j(x)\}.$$

- If $K(u_1, u_2) = u_1 u_2$ is the joint cdf of $(U_1, U_2)$, the joint survivor function is

$$S(t_1, t_2 | x) = K[\exp\{-Z_1(t_1)\varphi_1(x)\}, \exp\{-Z_2(t_2)\varphi_2(x)\}].$$
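The sampling scheme above can be sketched in code (a minimal simulation of my own, assuming Weibull baselines $Z_j(t) = t^{\alpha_j}$ so that $Z_j^{-1}(v) = v^{1/\alpha_j}$, and hypothetical parameter values):

```python
import math
import random

def draw_failure_time(u, phi_j, alpha_j):
    """Invert u = exp(-Z_j(T) * phi_j) for T, assuming Z_j(t) = t**alpha_j."""
    return (-math.log(u) / phi_j) ** (1.0 / alpha_j)

def draw_identified_minimum(phi1, phi2, alpha1=1.0, alpha2=2.0, rng=random):
    """Draw independent latent times and return the identified minimum (T, I)."""
    t1 = draw_failure_time(rng.random(), phi1, alpha1)
    t2 = draw_failure_time(rng.random(), phi2, alpha2)
    return (t1, 1) if t1 <= t2 else (t2, 2)
```

With exponential margins ($\alpha_j = 1$), $\Pr(I = 1) = \varphi_1/(\varphi_1 + \varphi_2)$, which gives a quick sanity check on the simulator.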

We can introduce dependence in ${T}_{1}$ and ${T}_{2}$ by introducing dependence in ${U}_{1}$ and ${U}_{2}$ via $K$.

Suppose $(U_1, U_2) \sim K(\cdot, \cdot)$ on $[0,1]^2$ and assume that $Z_1(0) = Z_2(0) = 0$.

Then the survivor function for $(T_1, T_2)$ is
$$S(t_1, t_2 | x) = K[\exp\{-Z_1(t_1)\varphi_1(x)\}, \exp\{-Z_2(t_2)\varphi_2(x)\}]. \tag{1}$$

## Generalization: Mixed Proportional Hazards

- Suppose that the competing risks are independent, $\varphi_j(x) = e^{x\beta_j}$, and that one of the covariates, $\omega$, is not observed:

$$S(t_1, t_2 | x) = \int_\Omega \exp[-Z_1(t_1) e^{x\beta_1 + c_1\omega}] \exp[-Z_2(t_2) e^{x\beta_2 + c_2\omega}] \, dG(\omega).$$

- We can arrive at this model by choosing $K$ such that:

$$K(\eta_1, \eta_2) = \int_\Omega \eta_1^{\exp(c_1\omega)} \eta_2^{\exp(c_2\omega)} \, dG(\omega).$$
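A minimal numerical sketch of this mixture $K$ (my own, with an assumed two-point distribution $G$ for $\omega$ and hypothetical values of $c_1, c_2$):

```python
import math

def K_frailty(eta1, eta2, c1=0.5, c2=1.0,
              omegas=(-1.0, 1.0), weights=(0.5, 0.5)):
    """K(eta1, eta2) = E_G[ eta1**exp(c1*w) * eta2**exp(c2*w) ],
    with G an assumed two-point distribution on `omegas`."""
    return sum(w * eta1 ** math.exp(c1 * om) * eta2 ** math.exp(c2 * om)
               for om, w in zip(omegas, weights))
```

Note that $K(1,1) = 1$, $K$ is increasing in each argument, and in general $K(u_1, u_2) \ne u_1 u_2$: averaging over the unobserved $\omega$ is exactly what induces dependence across the risks.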

## Generalization: Accelerated hazards

- Single-risk survivor function: $$S(t|x) = \exp[-Z\{t\varphi(x)\}]$$

- Joint survivor with dependent competing risks:

$$S({t}_{1},{t}_{2}|x)=K(\mathrm{exp}[-{Z}_{1}\{{t}_{1}{\varphi}_{1}(x)\}],\mathrm{exp}[-{Z}_{2}\{{t}_{2}{\varphi}_{2}(x)\}]).$$

- For any $K$, the marginal distributions give rise to univariate accelerated hazard models.
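A small sketch of the accelerated-hazards survivor (my own, assuming a Weibull $Z(t) = t^\alpha$ and $\varphi(x) = e^{x\beta}$): here the covariates rescale time, so $S(t|x) = S_0\{t\varphi(x)\}$ where $S_0(t) = \exp[-Z(t)]$ is the baseline survivor.

```python
import math

def baseline_survivor(t, alpha=1.5):
    """Baseline survivor S0(t) = exp[-Z(t)], with assumed Weibull Z(t) = t**alpha."""
    return math.exp(-t ** alpha)

def accelerated_survivor(t, x, alpha=1.5, beta=0.8):
    """Accelerated hazards: S(t|x) = exp[-Z(t * phi(x))] with phi(x) = exp(x*beta),
    i.e. the covariate accelerates (rescales) the time axis."""
    return math.exp(-(t * math.exp(x * beta)) ** alpha)
```

This makes the contrast with proportional hazards concrete: there $\varphi(x)$ multiplies the integrated hazard; here it multiplies $t$ inside $Z$.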

## Identification Theorem

Assume that $({T}_{1},{T}_{2})$ has joint distribution (1). Then ${Z}_{1}$, ${Z}_{2}$, ${\varphi}_{1}$, ${\varphi}_{2}$, and $K$ are identified from the minimum of $({T}_{1},{T}_{2})$ under the following assumptions:

1. $K$ is continuously differentiable with partial derivatives $K_1$ and $K_2$, and for $i = 1, 2$, $\lim_{n\to\infty} K_i(\eta_{1n}, \eta_{2n})$ is finite for all sequences $\eta_{1n}, \eta_{2n}$ with $\eta_{1n} \to 1$ and $\eta_{2n} \to 1$ as $n \to \infty$. We also assume that $K$ is strictly increasing in each of its arguments.

2. $Z_1(1) = Z_2(1) = 1$ and $\varphi_1(x_0) = \varphi_2(x_0) = 1$ for some $x_0$.

3. The support of $\{\varphi_1(x), \varphi_2(x)\}$ is $(0, \infty) \times (0, \infty)$.

4. $Z_1$ and $Z_2$ are nonnegative, differentiable, strictly increasing functions, except that we allow them to be infinite for finite $t$.

Notes about these assumptions:

- Assumption 1 strengthens monotonicity only slightly: as a joint cdf, $K$ is already *weakly* increasing in each argument.
- Assumption 2 is an innocuous normalization, since $\varphi_j(x)$ and $Z_j(t)$ are only jointly identified up to scale.
- Assumption 3 is satisfied, for example, when $\varphi_j(x) = \exp(x\beta_j)$ and there is a common covariate with support $\mathbb{R}$ and different coefficients across risks.

## Mapping Observables to Unobservables

Observed distributions:

$$Q_1(t|x) = \Pr(T_1 \ge t, T_2 \ge T_1 \mid x), \qquad Q_2(t|x) = \Pr(T_2 \ge t, T_1 \ge T_2 \mid x).$$

Tsiatis (1975) establishes the following mappings:

$$\frac{\partial Q_1}{\partial t}(t|x) = \left[\frac{\partial S}{\partial t_1}\right]_{t_1 = t_2 = t}, \qquad \frac{\partial Q_2}{\partial t}(t|x) = \left[\frac{\partial S}{\partial t_2}\right]_{t_1 = t_2 = t}.$$

We have $$\frac{\partial Q_1}{\partial t}(t|x) = -K_1[\exp\{-Z_1(t)\varphi_1(x)\}, \exp\{-Z_2(t)\varphi_2(x)\}] \exp\{-Z_1(t)\varphi_1(x)\} Z_1'(t) \varphi_1(x).$$

## Identification of ${\varphi}_{j}$

Taking the ratio of $\frac{\partial {Q}_{1}(t|x)}{\partial t}$ at $x$ and ${x}_{0}$ yields

$$\frac{K_1[\exp\{-Z_1(t)\varphi_1(x)\}, \exp\{-Z_2(t)\varphi_2(x)\}] \exp\{-Z_1(t)\varphi_1(x)\} Z_1'(t) \varphi_1(x)}{K_1[\exp\{-Z_1(t)\varphi_1(x_0)\}, \exp\{-Z_2(t)\varphi_2(x_0)\}] \exp\{-Z_1(t)\varphi_1(x_0)\} Z_1'(t) \varphi_1(x_0)}.$$

The $Z_1'(t)$ terms cancel. Taking $t \to 0$, the exponential factors tend to 1 and the $K_1$ factors tend to a common finite limit (Assumption 1), so the ratio converges to $\varphi_1(x)/\varphi_1(x_0) = \varphi_1(x)$ by the normalization $\varphi_1(x_0) = 1$. Our choice of $x$ was arbitrary, so $\varphi_1(x)$ is identified on the entire support of $X$. Similarly for $\varphi_2(x)$.
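This limit can be checked numerically (my own sketch, assuming independent risks so $K(u_1, u_2) = u_1 u_2$, hence $K_1(u_1, u_2) = u_2$, with baselines $Z_j(t) = t$ and hypothetical covariate values):

```python
import math

def sub_density_Q1(t, phi1, phi2):
    """|dQ1/dt| = K1(.,.) * exp(-Z1(t)*phi1) * Z1'(t) * phi1, with assumed
    independent risks (K1 = u2) and Z_j(t) = t, so Z1'(t) = 1."""
    K1 = math.exp(-t * phi2)                   # K1 evaluated at the survivor values
    return K1 * math.exp(-t * phi1) * 1.0 * phi1

def ratio(t, phi1_x, phi2_x):
    """Ratio at x versus the normalized point x0, where phi1(x0) = phi2(x0) = 1."""
    return sub_density_Q1(t, phi1_x, phi2_x) / sub_density_Q1(t, 1.0, 1.0)
```

At $t$ near zero the ratio equals $\varphi_1(x)$ to many digits, regardless of the values of $\varphi_2$.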

## Identification of $K$

We know $S(t,t|x)$ since $S(t,t|x)={Q}_{1}(t|x)+{Q}_{2}(t|x)$. Furthermore, $$S(t,t|x)=K(\mathrm{exp}[-{Z}_{1}(t){\varphi}_{1}(x)],\mathrm{exp}[-{Z}_{2}(t){\varphi}_{2}(x)]).$$

Setting $t = 1$ gives $$S(1, 1|x) = K(\exp[-\varphi_1(x)], \exp[-\varphi_2(x)]),$$ and letting $\varphi_1(x)$ and $\varphi_2(x)$ vary over $(0, \infty)^2$ (possible by Assumption 3) yields $K$.
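A numerical sketch of this step (my own, with an assumed Gumbel copula standing in for the true but unknown $K$): to learn $K$ at any point $(\eta_1, \eta_2) \in (0,1)^2$, choose $x$ with $\varphi_1(x) = -\log\eta_1$ and $\varphi_2(x) = -\log\eta_2$ (possible by Assumption 3), and read off $S(1,1|x)$.

```python
import math

def K_true(u1, u2, theta=2.0):
    """An assumed Gumbel copula playing the role of the unknown K."""
    return math.exp(-((-math.log(u1)) ** theta + (-math.log(u2)) ** theta) ** (1.0 / theta))

def S_tt(t, phi1, phi2):
    """Observable S(t,t|x) = K(exp[-Z1(t)phi1], exp[-Z2(t)phi2]); here Z_j(t) = t,
    which satisfies the normalization Z_j(1) = 1."""
    return K_true(math.exp(-t * phi1), math.exp(-t * phi2))

def recover_K(eta1, eta2):
    """Pick x so that phi1(x) = -log(eta1), phi2(x) = -log(eta2); read off S(1,1|x)."""
    return S_tt(1.0, -math.log(eta1), -math.log(eta2))
```

The recovered value agrees with the true $K$ at every interior point, using only the observable diagonal survivor.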

## Identification of ${Z}_{j}$

$$S(t, t|x) = K(\exp[-Z_1(t)\varphi_1(x)], \exp[-Z_2(t)\varphi_2(x)])$$

Consider a sequence of covariate values along which $\varphi_2(x) \to 0$ while $\varphi_1(x)$ is held fixed (possible by Assumption 3).

Then $$S(t, t|x) \to K(\exp[-Z_1(t)\varphi_1(x)], 1).$$

Since $K$ and $\varphi_1$ are known and $K$ is strictly increasing in each argument, we can invert this expression to recover $Z_1(t)$ for any $t$.

Similarly for $Z_2(t)$.
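This last inversion can be sketched numerically (my own example, assuming a Gumbel copula for $K$, for which $K(u, 1) = u$, and a Weibull $Z_1(t) = t^\alpha$): drive $\varphi_2$ toward zero, read off the limit of $S(t,t|x)$, and solve for $Z_1(t)$.

```python
import math

ALPHA = 1.7  # assumed Weibull shape of the true Z1(t) = t**ALPHA

def K_gumbel(u1, u2, theta=2.0):
    """Assumed Gumbel copula for K; note that K(u, 1) = u."""
    return math.exp(-((-math.log(u1)) ** theta + (-math.log(u2)) ** theta) ** (1.0 / theta))

def S_obs(t, phi1, phi2):
    """Observable S(t,t|x) with true Z1(t) = t**ALPHA and Z2(t) = t."""
    return K_gumbel(math.exp(-t ** ALPHA * phi1), math.exp(-t * phi2))

def recover_Z1(t, phi1=1.0, phi2_small=1e-10):
    """As phi2 -> 0, S(t,t|x) -> K(exp[-Z1(t)phi1], 1) = exp[-Z1(t)phi1],
    so Z1(t) = -log(limit) / phi1."""
    return -math.log(S_obs(t, phi1, phi2_small)) / phi1
```

The recovered function matches $t^\alpha$, using only observables plus the already-identified $K$ and $\varphi_1$.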

## Conclusion

**Identification argument:**

Given the distribution of $(T, I)$, exploiting the multiplicative separability of the hazard gives us $\varphi_j(x)$ for $j = 1, 2$.

Using the full range of ${\varphi}_{j}(x)$ on $(0,\mathrm{\infty})$ yields $K$.

Using $K$, ${\varphi}_{j}$, and related properties gives us ${Z}_{j}(t)$.

**Implications of Nonparametric Identification:**

Identification does not depend on parametric functional forms or assumed forms of risk dependence (modulo separability of the hazard).

Highlights the role of regressors in identification in contrast to the Cox-Tsiatis nonidentification result.

Suggests the possibility of a nonparametric estimator.