Areas of Expertise
- Program Evaluation
- Quantitative Methods
Avi Feller is an assistant professor at the Goldman School, where he works at the intersection of public policy, data science, and statistics. His methodological research centers on learning more from social policy evaluations, especially randomized experiments. His applied research focuses on working with governments to use data in designing, implementing, and evaluating policies. Prior to his doctoral studies, Feller served as Special Assistant to the Director at the White House Office of Management and Budget and worked at the Center on Budget and Policy Priorities. Feller received a Ph.D. in Statistics from Harvard University, an M.Sc. in Applied Statistics as a Rhodes Scholar at the University of Oxford, and a B.A. in Political Science and Applied Mathematics from Yale University.
Download a PDF (112KB, updated 11-08-2017)
2016. Psychological Science. In press. (with T. Rogers)
People are often exposed to exemplary peer performances, sometimes by design in interventions. In two studies, we showed that exposure to exemplary peer performances can undermine motivation and success by causing people to perceive that they cannot attain their peers’ high levels of performance; it also causes de-identification with the relevant domain. We examined such discouragement by peer excellence by exploiting the incidental exposure to peers’ abilities that occurs when students are asked to assess each other’s work. Study 1 was a natural experiment in a massive open online course that employed peer assessment (N = 5,740). Exposure to exemplary peer performances caused a large proportion of students to quit the course. Study 2 explored the underlying psychological mechanisms in an online replication (N = 361). Discouragement by peer excellence has theoretical implications for work on social judgment, social comparison, and reference bias, and practical implications for interventions that induce social comparisons.
Published Version (474KB)
Compared to What? Variation in the Impacts of Early Childhood Education by Alternative Care-Type Settings
2016. Annals of Applied Statistics. In press. (with T. Grindal, L. Miratrix, and L. Page)
Early childhood education research often compares a group of children who receive the intervention of interest to a group of children who receive care in a range of different care settings. In this paper, we estimate differential impacts of an early childhood intervention by alternative care setting, using data from the Head Start Impact Study, a large-scale randomized evaluation. To do so, we utilize a Bayesian principal stratification framework to estimate separate impacts for two types of Compliers: those children who would otherwise be in other center-based care when assigned to control and those who would otherwise be in home-based care. We find strong, positive short-term effects of Head Start on receptive vocabulary for those Compliers who would otherwise be in home-based care. By contrast, we find no meaningful impact of Head Start on vocabulary for those Compliers who would otherwise be in other center-based care. Our findings suggest that alternative care type is a potentially important source of variation in early childhood education interventions.
Publisher's Version (4KB)
2016. Journal of the Royal Statistical Society, Series B. In press. (with P. Ding and L. Miratrix)
Applied researchers are increasingly interested in whether and how treatment effects vary in randomized evaluations, especially variation that is not explained by observed covariates. We propose a model-free approach for testing for the presence of such unexplained variation. To use this randomization-based approach, we must address the fact that the average treatment effect, which is generally the object of interest in randomized experiments, actually acts as a nuisance parameter in this setting. We explore potential solutions and advocate for a method that guarantees valid tests in finite samples despite this nuisance. We also show how this method readily extends to testing for heterogeneity beyond a given model, which can be useful for assessing the sufficiency of a given scientific theory. Finally, we apply our method to the Head Start Impact Study, a large-scale randomized evaluation of a federal preschool program, and find significant unexplained treatment effect variation.
Principal Stratification: A Tool for Understanding Variation in Program Effects Across Endogenous Subgroups
2015. American Journal of Evaluation. (with L. Page, T. Grindal, L. Miratrix, and M.-A. Somers)
Increasingly, researchers are interested in questions regarding treatment-effect variation across partially or fully latent subgroups defined not by pretreatment characteristics but by post-randomization actions. One promising approach to address such questions is principal stratification. Under this framework, a researcher defines endogenous subgroups, or principal strata, based on post-randomization behaviors under both the observed and the counterfactual experimental conditions. These principal strata give structure to such research questions and provide a framework for determining estimation strategies to obtain desired effect estimates. This article provides a nontechnical primer to principal stratification. We review selected applications to highlight the breadth of substantive questions and methodological issues that this method can inform. We then discuss its relationship to instrumental variables analysis to address binary noncompliance in an experimental context and highlight how the framework can be generalized to handle more complex posttreatment patterns. We emphasize the counterfactual logic fundamental to principal stratification and the key assumptions that render analytic challenges more tractable. We briefly discuss technical aspects of estimation procedures, providing a short guide for interested readers.
Published Version (198KB)
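To make the connection to instrumental variables concrete, the standard Wald estimator targets the average effect for compliers under binary noncompliance, assuming monotonicity and the exclusion restriction. This is a generic illustration of that estimator, not code from the article; the function name is hypothetical.

```python
import numpy as np

def wald_cace(y, z, d):
    """Wald / instrumental-variables estimate of the complier average
    causal effect: the intention-to-treat effect on the outcome divided
    by the intention-to-treat effect on treatment take-up.

    y: observed outcomes
    z: randomized assignment (0/1), used as the instrument
    d: treatment actually received (0/1)
    """
    y, z, d = (np.asarray(a, float) for a in (y, z, d))
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()  # share of compliers
    return itt_y / itt_d
```

The denominator is the proportion of compliers, so the estimator rescales the intention-to-treat effect to the latent stratum whose treatment status responds to assignment.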
2015. Emerging Trends in the Social and Behavioral Sciences. (with A. Gelman)
Hierarchical models play three important roles in modeling causal effects: (i) accounting for data collection, such as in stratified and split-plot experimental designs; (ii) adjusting for unmeasured covariates, such as in panel studies; and (iii) capturing treatment effect variation, such as in subgroup analyses. Across all three areas, hierarchical models, especially Bayesian hierarchical modeling, offer substantial benefits over classical, non-hierarchical approaches. After discussing each of these topics, we explore some recent developments in the use of hierarchical models for causal inference and conclude with some thoughts on new directions for this research area.
Published Version (3MB)
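As a minimal sketch of the third role above, partial pooling of subgroup treatment effect estimates can be illustrated with a simplified normal-normal empirical Bayes model (a method-of-moments stand-in for a full Bayesian hierarchical model; names here are hypothetical).

```python
import numpy as np

def shrink_subgroup_effects(est, se):
    """Partial pooling of subgroup treatment effect estimates under a
    simple normal-normal model: each estimate is shrunk toward the
    precision-weighted grand mean, with more shrinkage for noisier
    subgroups.
    """
    est, se = np.asarray(est, float), np.asarray(se, float)
    grand = np.average(est, weights=1.0 / se**2)
    # method-of-moments estimate of the between-subgroup variance tau^2
    tau2 = max(0.0, np.var(est, ddof=1) - np.mean(se**2))
    w = tau2 / (tau2 + se**2)  # shrinkage weight for each subgroup
    return grand + w * (est - grand)
```

When the estimated between-subgroup variance is zero, every subgroup estimate collapses to the grand mean (complete pooling); as it grows relative to the sampling variances, the estimates approach the unpooled subgroup means.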
Articles and Op-Eds
The New York Times, November 12, 2012