Poster Paper: Mental Health Measurement in Program Evaluation: An Example from the Oregon Health Insurance Experiment

Thursday, November 8, 2018
Exhibit Hall C - Exhibit Level (Marriott Wardman Park)


Y. Nina Gao, University of Chicago


Economists are occasionally interested in measuring how public policies affect mental health. For such evaluations, past practice dictates that a diagnostic screening questionnaire score, or a dichotomization thereof, is an appropriate proxy for mental health. However, this practice comes at the cost of significant attenuation bias. In this study, I use public data from the Oregon Health Insurance Experiment (OHIE) and a bifactor model, borrowed from the psychometric literature, to study the relationships among items and to generate predicted mental health scores from those item responses. I then re-analyze the study data using the full longitudinal sample.
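
For reference, a bifactor item response model of the kind described here posits, for the response of person i to binary item j, an endorsement probability driven by one general factor and one group-specific factor. The notation below is a generic sketch of that structure, not the exact specification estimated in this study:

    P(y_{ij} = 1 \mid \theta_i^{G}, \theta_i^{s(j)}) = \Phi\!\left( a_j^{G}\,\theta_i^{G} + a_j^{s(j)}\,\theta_i^{s(j)} - b_j \right)

Here \theta_i^{G} is the general mental-health factor underlying the predicted score, \theta_i^{s(j)} is the group factor for the questionnaire or symptom cluster containing item j, the a's are discrimination parameters (loadings), b_j is an item threshold, and the general and group factors are assumed mutually orthogonal.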

I find that items from clinical questionnaires perform stably relative to one another across time. These stable inter-item relationships can be used to model mental health across time even when the list of administered items is not identical from period to period. This modeling expands the possibilities for longitudinal analysis and improves the power to detect changes in mental health.
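
As a minimal illustration of how stable loadings permit scoring across waves with non-identical item lists, the sketch below computes regression-method factor scores for a single general factor using only whichever items a given wave administered. The item names, loading values, and one-factor simplification are illustrative assumptions, not estimates from the OHIE data:

    import numpy as np

    # Illustrative loadings of five hypothetical screening items on a single
    # general mental-health factor; in practice these come from the fitted model.
    LOADINGS = {"phq_1": 0.78, "phq_2": 0.74, "mcs_1": 0.65, "mcs_2": 0.61, "k6_1": 0.70}

    def score_wave(responses, loadings=LOADINGS):
        """Regression-method factor score using only the items present in a wave.

        `responses` maps item name -> standardized item response for one person;
        items not administered in that wave are simply omitted from the formula.
        """
        items = [k for k in responses if k in loadings]
        lam = np.array([loadings[k] for k in items])   # loadings for available items
        psi = 1.0 - lam ** 2                           # uniquenesses (unit item variances)
        x = np.array([responses[k] for k in items])
        # Thurstone (regression) score: theta_hat = lam' Sigma^{-1} x,
        # with Sigma = lam lam' + diag(psi) under a one-factor model.
        sigma = np.outer(lam, lam) + np.diag(psi)
        return float(lam @ np.linalg.solve(sigma, x))

    # Baseline administered all five items; a follow-up wave only three of them,
    # yet both waves are scored on the same latent scale.
    baseline = {"phq_1": 1.2, "phq_2": 0.9, "mcs_1": 0.4, "mcs_2": 0.7, "k6_1": 1.0}
    followup = {"phq_1": 0.3, "mcs_1": -0.2, "k6_1": 0.1}
    print(score_wave(baseline), score_wave(followup))

Because the loadings are held fixed across waves, scores from different waves sit on a common scale, which is what makes the cross-period comparisons below possible.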

Item response modeling can also change the interpretation of study findings, as in the case of the OHIE longitudinal data. Prior to response modeling, researchers were limited to analyzing data collected after one year. By reconciling the scales used across study periods, I find a previously underappreciated baseline imbalance in mental health in favor of the treatment group. This baseline imbalance significantly attenuates the unadjusted difference-in-differences estimates in the longitudinal OHIE public data, such that, without baseline adjustment, the measured treatment effect is not distinguishable from zero. After adjusting for baseline mental health-by-time effects, I find that the treatment effect of Medicaid is 0.017 (se = 0.004) standard deviations of mental health (compared with the 0.2 standard deviation difference-in-differences estimate reported by Baicker et al. 2013). This effect is driven primarily by a large treatment effect at 6 months among participants who started out with below-median mental health, an effect that decreases over time.
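
For context, the unadjusted and baseline-adjusted estimates contrast specifications along the lines of the following (the notation is a sketch, not the paper's exact regression):

    \hat{MH}_{it} = \alpha + \beta\,\text{Lottery}_i + \sum_{t} \gamma_t\,\text{Period}_t + \sum_{t} \delta_t\,(\text{Lottery}_i \times \text{Period}_t) + \sum_{t} \lambda_t\,(\hat{MH}_{i0} \times \text{Period}_t) + \varepsilon_{it}

where \hat{MH}_{it} is the predicted mental-health score for person i in follow-up period t, Lottery_i indicates selection into the treatment group, \hat{MH}_{i0} is the baseline score, and the \delta_t are the period-specific treatment effects; omitting the \hat{MH}_{i0} \times \text{Period}_t terms yields the unadjusted difference-in-differences estimator, which the baseline imbalance attenuates.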