Journal of the American Statistical Association
In an observational study of treatment effects, subjects are not randomly assigned to treatment or control, so differing outcomes in treated and control groups may reflect a bias from nonrandom assignment rather than a treatment effect. After adjusting for measured pretreatment covariates, perhaps by matching, a sensitivity analysis determines the magnitude of bias from an unmeasured covariate that would need to be present to alter the conclusions of the naive analysis that presumes adjustments eliminated all bias. Other things being equal, larger effects tend to be less sensitive to bias than smaller effects. Effect modification is an interaction between a treatment and a pretreatment covariate controlled by matching, so that the treatment effect is larger at some values of the covariate than at others. In the presence of effect modification, it is possible that results are less sensitive to bias in subgroups experiencing larger effects. Two cases are considered: (i) an a priori grouping into a few categories based on covariates controlled by matching and (ii) a grouping discovered empirically in the data at hand. In case (i), subgroup specific bounds on p-values are combined using the truncated product of p-values. In case (ii), information that is fixed under the null hypothesis of no treatment effect is used to partition matched pairs in the hope of identifying pairs with larger effects. The methods are evaluated using an asymptotic device, the design sensitivity, and using simulation. Sensitivity analysis for a test of the global null hypothesis of no effect is converted to sensitivity analyses for subgroup analyses using closed testing. A study of an intervention to control malaria in Africa is used to illustrate.
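The abstract's case (i) combines subgroup-specific sensitivity bounds on p-values using the truncated product of p-values (Zaykin et al.), which multiplies only those p-values at or below a truncation point τ and refers the product to its null distribution. A minimal Monte Carlo sketch of that combination, assuming independent component p-values and using illustrative function names and parameter defaults not taken from the paper:

```python
import numpy as np

def truncated_product(pvals, tau=0.1):
    """Truncated product statistic: the product of the
    p-values that fall at or below the truncation point tau."""
    p = np.asarray(pvals, dtype=float)
    keep = p <= tau
    # If no p-value clears the threshold, the statistic is the empty product, 1.
    return float(np.prod(p[keep])) if keep.any() else 1.0

def truncated_product_pvalue(pvals, tau=0.1, n_sim=100_000, seed=0):
    """Monte Carlo p-value for the truncated product, comparing the
    observed statistic with statistics from independent Uniform(0,1)
    p-values, its null distribution under the global null hypothesis
    when the component tests are independent."""
    rng = np.random.default_rng(seed)
    k = len(pvals)
    w_obs = truncated_product(pvals, tau)
    sims = rng.uniform(size=(n_sim, k))
    # Replace simulated p-values above tau by 1 so they drop out of the product.
    w_sim = np.where(sims <= tau, sims, 1.0).prod(axis=1)
    # Smaller values of the statistic are stronger evidence against the null.
    return (1 + np.sum(w_sim <= w_obs)) / (n_sim + 1)
```

In a sensitivity analysis, the `pvals` fed to this combination would be the subgroup-specific upper bounds on p-values at a given magnitude of bias, so a small combined value indicates the conclusion survives that bias in at least some subgroup.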
This is an Accepted Manuscript of an article published by Taylor & Francis in Journal of the American Statistical Association on 20 Nov 2012, available online: http://www.tandfonline.com/10.1080/01621459.2012.742018.
Keywords: Fisher's combination of p-values, power of a sensitivity analysis, sensitivity analysis, Stephenson's test, truncated product of p-values, U-statistic, Wilcoxon test
Hsu, J. Y., Small, D. S., & Rosenbaum, P. R. (2013). Effect Modification and Design Sensitivity in Observational Studies. Journal of the American Statistical Association, 108 (501), 135-148. http://dx.doi.org/10.1080/01621459.2012.742018
Date Posted: 27 November 2017