Ad Hoc MS Program Director; Associate Professor of Statistics
Policy makers and practitioners are increasingly called upon to make decisions on the basis of scientific evidence, particularly results from large randomized trials and the combination of results across many smaller trials via meta-analysis. My research focuses on developing statistical methods and tools for making these ‘causal generalizations’. With regard to large randomized trials, I am interested in developing methods to improve their generalizability and external validity, particularly in education and psychology. This includes the development of improved research designs as well as the use of propensity score methods for improved estimation. My research in meta-analysis focuses on methods for modeling and adjusting for dependence between effect sizes. Here my interest is in developing small-sample adjustments for cluster-robust variance estimation, methods that have applications not only in meta-analysis but also in economics and survey sampling. To date, my research has been funded by the National Science Foundation, the Institute of Education Sciences, the Spencer Foundation, and the Raikes Foundation.
Tipton, E., Pustejovsky, J., & Ahmadi, H. (2019) A history of meta-regression: Technical, conceptual, and practical developments between 1974 and 2018. Research Synthesis Methods, 10(2): 161-179.
Pustejovsky, J. & Tipton, E. (2018) Small sample methods for cluster-robust variance estimation and hypothesis testing in fixed effects models. Journal of Business and Economic Statistics, 36(4): 672-683.
Tipton, E. & Peck, L. (2017) A design-based approach to improve external validity in welfare policy evaluations. Evaluation Review (Special Issue: External Validity 1), 41(4): 326-356.
Tipton, E. & Shuster, J.J. (2017) A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach. Statistics in Medicine, 36(23): 3621-3635.
Tipton, E. & Pustejovsky, J. (2015) Small-sample adjustments to multivariate hypothesis tests in robust variance estimation in meta-regression. Journal of Educational and Behavioral Statistics, 40(6): 604-634.
Tipton, E. (2015) Small sample adjustments for robust variance estimation with meta-regression. Psychological Methods, 20(3): 375-393.
Tipton, E. (2014) How generalizable is your experiment? Comparing a sample and population through a generalizability index. Journal of Educational and Behavioral Statistics, 39(6): 478-501.
Tipton, E. (2013) Improving generalizations from experiments using propensity score subclassification: Assumptions, properties, and contexts. Journal of Educational and Behavioral Statistics, 38(3): 239-266.