Statistics Papers

Document Type

Journal Article

Date of this Version

2011

Publication Source

Journal of the American Statistical Association

Volume

106

Issue

494

Start Page

511

Last Page

524

DOI

10.1198/jasa.2011.ap10604

Abstract

During a few years around the turn of the millennium, a series of local hospitals in Philadelphia closed their obstetrics units, with the consequence that many mothers-to-be arrived unexpectedly at the city’s large, regional teaching hospitals whose obstetrics units remained open. Nothing comparable happened in other United States cities, where there were only sporadic changes in the availability of obstetrics units. What effect did these closures have on mothers and their newborns? We study this question by comparing Philadelphia before and after the closures to a control Philadelphia constructed from elsewhere in Pennsylvania, California, and Missouri, matching mothers for 59 observed covariates including year of birth. The analysis focuses on the period 1995–1996, when there were no closures, and the period 1997–1999, when five hospitals abruptly closed their obstetrics units. Using a new sensitivity analysis for difference-in-differences with binary outcomes, we examine the possibility that Philadelphia mothers differed from control mothers in terms of some covariate that was not measured, and that perhaps the distribution of that unobserved covariate changed in different ways in Philadelphia and control Philadelphia in the years before and after the closures. We illustrate two recently proposed techniques for the design and analysis of observational studies, namely split samples and evidence factors. To boost insensitivity to unmeasured bias, we drew a small random planning sample of about 26,000 mothers in 13,000 pairs and used it to frame hypotheses that promised to be less sensitive to bias; these hypotheses were then tested on the large, independent complementary analysis sample of nearly 240,000 mothers in 120,000 pairs. The splitting was successful twice over: (i) it identified an interesting and moderately insensitive conclusion, and (ii) by comparison of the planning and analysis samples, it is clearly seen to have avoided an exaggerated claim of insensitivity to unmeasured bias that might have occurred by focusing on the least sensitive of many findings. Also, we identified two approximate evidence factors and one test for unmeasured bias: (i) factor 1 compared Philadelphia to control before and after the closures, (ii) factor 2 focused on the years 1997–1999 of abrupt closures and compared zip codes with closures to zip codes without closures, and (iii) the test for bias focused on the years 1995–1996 prior to the closures and compared zip codes that would have closures in 1997–1999 to zip codes without closures in 1997–1999; any ostensible effect found in that last comparison is surely bias from the characteristics of the Philadelphia zip codes in which closures took place. Approximate evidence factors provide nearly independent tests of a null hypothesis, such that the evidence in each factor would be unaffected by certain biases that would invalidate the other factor.
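
The sensitivity analysis for difference-in-differences with binary outcomes is developed in the paper itself; as a rough, self-contained point of reference, the sketch below implements the standard Rosenbaum-style sensitivity bound for McNemar's test on matched pairs with binary outcomes, together with a random planning/analysis split of the pairs in the spirit of the split-sample design described above. The function name, the 10% planning fraction, and the simulated outcome rates are illustrative assumptions, not the paper's data or its exact method.

import numpy as np
from scipy.stats import binom

def mcnemar_sensitivity_pvalue(treated_events, control_events, gamma=1.0):
    """Upper-bound one-sided p-value for McNemar's test under
    Rosenbaum's sensitivity model with bias parameter gamma >= 1.

    Each array holds one binary outcome per matched pair; gamma = 1
    recovers the usual randomization test, and larger gamma allows a
    stronger unmeasured bias in who was exposed.
    """
    treated_events = np.asarray(treated_events)
    control_events = np.asarray(control_events)
    discordant = treated_events != control_events   # only discordant pairs inform the test
    d = int(discordant.sum())
    t = int(treated_events[discordant].sum())       # discordant pairs where the treated mother had the event
    p_plus = gamma / (1.0 + gamma)                  # worst-case chance the treated unit is the one with the event
    return binom.sf(t - 1, d, p_plus)               # P(Binomial(d, p_plus) >= t)

# Illustration on simulated pairs (fabricated rates, not study data):
rng = np.random.default_rng(0)
n_pairs = 130_000
treated = rng.binomial(1, 0.06, n_pairs)
control = rng.binomial(1, 0.05, n_pairs)

planning = rng.random(n_pairs) < 0.10               # small planning sample to frame hypotheses
analysis = ~planning                                # large complementary analysis sample
for gamma in (1.0, 1.25, 1.5):
    p = mcnemar_sensitivity_pvalue(treated[analysis], control[analysis], gamma)
    print(f"gamma = {gamma:.2f}: upper-bound p-value = {p:.3g}")

Reporting the largest gamma at which the upper-bound p-value remains below a conventional level such as 0.05 is one common way to summarize how insensitive a finding is to unmeasured bias.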

Copyright/Permission Statement

This is an Accepted Manuscript of an article published by Taylor & Francis in Journal of the American Statistical Association on 24 Jan 2012, available online: http://www.tandfonline.com/10.1198/jasa.2011.ap10604.

Keywords

design sensitivity, difference-in-differences, evidence factor, optimal matching, sensitivity analysis, test for bias


Date Posted: 27 November 2017