Effect Modification and Design Sensitivity in Observational Studies

Penn collection
Statistics Papers
Discipline
Statistics and Probability
Subject
Fisher's combination of p-values
power of a sensitivity analysis
sensitivity analysis
Stephenson's test
truncated product of p-values
U-statistic
Wilcoxon test
Author
Hsu, Jesse Y
Small, Dylan S
Rosenbaum, Paul R
Abstract

In an observational study of treatment effects, subjects are not randomly assigned to treatment or control, so differing outcomes in treated and control groups may reflect a bias from nonrandom assignment rather than a treatment effect. After adjusting for measured pretreatment covariates, perhaps by matching, a sensitivity analysis determines the magnitude of bias from an unmeasured covariate that would need to be present to alter the conclusions of the naive analysis that presumes adjustments eliminated all bias. Other things being equal, larger effects tend to be less sensitive to bias than smaller effects. Effect modification is an interaction between a treatment and a pretreatment covariate controlled by matching, so that the treatment effect is larger at some values of the covariate than at others. In the presence of effect modification, it is possible that results are less sensitive to bias in subgroups experiencing larger effects. Two cases are considered: (i) an a priori grouping into a few categories based on covariates controlled by matching and (ii) a grouping discovered empirically in the data at hand. In case (i), subgroup specific bounds on p-values are combined using the truncated product of p-values. In case (ii), information that is fixed under the null hypothesis of no treatment effect is used to partition matched pairs in the hope of identifying pairs with larger effects. The methods are evaluated using an asymptotic device, the design sensitivity, and using simulation. Sensitivity analysis for a test of the global null hypothesis of no effect is converted to sensitivity analyses for subgroup analyses using closed testing. A study of an intervention to control malaria in Africa is used to illustrate.
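
To make the combination step in case (i) concrete, the sketch below illustrates the truncated product of p-values named in the abstract and keywords: only p-values at or below a truncation point are multiplied, and the null distribution of the product is approximated by Monte Carlo with independent Uniform(0, 1) p-values. This is a minimal Python illustration, not the authors' implementation; the function name, the truncation point tau = 0.2, and the example subgroup p-value bounds are hypothetical.

    import numpy as np

    def truncated_product_pvalue(pvals, tau=0.2, n_sim=100000, seed=0):
        """Monte Carlo p-value for the truncated product statistic:
        multiply only those p-values at or below tau and compare with
        products formed from independent Uniform(0, 1) draws."""
        rng = np.random.default_rng(seed)
        pvals = np.asarray(pvals, dtype=float)
        k = pvals.size

        def log_stat(p):
            kept = p[p <= tau]
            # work on the log scale to avoid underflow;
            # an empty product corresponds to log W = 0
            return np.sum(np.log(kept)) if kept.size else 0.0

        w_obs = log_stat(pvals)
        w_null = np.array([log_stat(rng.uniform(size=k)) for _ in range(n_sim)])
        # smaller (more negative) log-products are stronger evidence
        return float(np.mean(w_null <= w_obs))

    # Hypothetical subgroup-specific upper bounds on p-values
    # from a sensitivity analysis at a fixed bias level.
    upper_bounds = [0.012, 0.48, 0.003]
    print(truncated_product_pvalue(upper_bounds, tau=0.2))

The appeal of truncation in this setting, as the abstract suggests, is that subgroups with small effects (and hence large p-value bounds) contribute little or nothing to the combined statistic, so a subgroup with a larger effect can dominate the conclusion.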

Publication date
2013-01-01
Journal title
Journal of the American Statistical Association