Implementing Randomized Evaluations in Government: Lessons from the J-PAL State and Local Innovation Initiative
Author(s): Chabrier, Julia; Hall, Todd; and Struhl, Ben.
Organizational Author(s): Abdul Latif Jameel Poverty Action Lab; Alfred P. Sloan Foundation
Resource Availability: Publicly available
Describes lessons learned from partnerships with state and local governments: how to identify opportunities for randomized evaluations, how to feasibly embed them in the implementation of a program or policy, and how to overcome some of the common challenges in designing and carrying them out.
“[This guide explores] partnerships with the governments that have participated in the [State and Local Innovation Initiative]…so that other governments that are interested in pursuing randomized evaluations can learn from their experience….[It provides] guidance on how to identify good opportunities for randomized evaluations, how to embed randomized evaluations into program or policy implementation, and how to overcome some of the common challenges in designing and carrying out randomized evaluations….While some of the concepts in this guide are specific to randomized evaluations, many are applicable to other methods of impact evaluation as well” (p.iii).
“Randomized evaluations (also known as randomized controlled trials or RCTs) can be a…tool for generating rigorous evidence about the effectiveness of policies and programs….[R]elatively few state and local governments have launched randomized evaluations. There are a number of potential barriers to greater adoption of randomized evaluations by state and local governments” (p.1).
“The goal of the [Initiative] is to generate evidence that state and local governments can use to improve their programs and policies and ultimately the lives of the people they serve” (p.2).
“State and local governments…are developing innovative solutions to address complex policy challenges, almost always with limited resources. Too often, they must make policy decisions without the benefit of rigorous evidence about what has been tried and proven elsewhere, or the opportunity to learn which of their own policies and programs are effective” (p.1).
“The guide is organized into six sections:
- Why [the authors] launched the…State and Local Innovation Initiative
- What is a randomized evaluation and why randomize?
- Laying the groundwork for a research project
- Identifying opportunities for randomized evaluations
- Implementing an evaluation
- Making future evaluations easier” (p.iii).
(Abstractor: Author and Website Staff)
Major Findings & Recommendations
The authors recommend several strategies for conducting randomized evaluations, including:
• “[A] successful research partnership involves close collaboration between the researcher and the government to design a high-quality evaluation that is also politically, ethically, and logistically feasible” (p.7).
• “Randomized evaluations are not one-size-fits-all; rather, they can be thoughtfully tailored to minimize disruption for programs implemented by multiple service providers or that involve multiple service models” (p.11).
• “[T]he optimal sample size for an evaluation depends on many factors. In general, larger sample sizes enable researchers to detect even small differences in outcomes between the treatment and control groups, whereas smaller sample sizes can only detect large differences in outcomes” (p.13).
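The relationship the authors describe between sample size and detectable effect size can be made concrete with a standard power calculation. The sketch below is not taken from the guide; it uses the common normal-approximation formula for a two-sided, two-sample test, and the function name `sample_size_per_arm` is illustrative.

```python
import math
from statistics import NormalDist  # Python 3.8+

def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size needed in each arm (treatment and control)
    to detect a given standardized effect size, using the textbook
    normal-approximation formula for a two-sided, two-sample test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# A small effect demands a much larger sample than a large one:
print(sample_size_per_arm(0.2))  # small standardized effect
print(sample_size_per_arm(0.5))  # medium standardized effect
```

Under these conventional defaults (5% significance, 80% power), detecting a small effect requires several hundred participants per arm, while a medium effect needs only a few dozen, which illustrates why sample size is a first-order feasibility question for state and local evaluations.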
“There are a number of circumstances in which a randomized evaluation would not be feasible or appropriate, including when:
• There is strong evidence that the program has a positive impact and…[there are sufficient] resources to serve everyone who is eligible” (p.14).
• “The program’s implementation is changing” (p.14).
• “The sample size is too small” (p.14).
• “The time and financial cost outweigh the potential benefits of the evidence generated” (p.14).
The guide also includes several case studies. For example:
“The evaluation of the Bridges to Success program in Rochester illustrates how service providers and researchers can navigate uncertainty around program enrollment and ensure that the evaluation does not reduce the number of people who would have otherwise received services” (p.21).
“In Puerto Rico, the process of designing a randomized evaluation to assess the impact of an earnings incentive and job-coaching program…began under one governor’s administration and now continues into another. By securing buy-in from staff…, drawing on support from outside stakeholders, and providing opportunities for the new administration to provide input into the evaluation, the research team has been able to sustain the project through the administration change” (p.24).
(Abstractor: Author and Website Staff)