Determining if your program is having a positive impact (i.e., impact evaluation 101): An interview with David Evans, Senior Economist, The World Bank – Episode #122
Is my program or initiative having a positive impact?
It’s a question about which organizational leaders may want hard evidence, either to take stock and help improve program results, or to satisfy their authorizers or funders who may be asking for rigorous evidence of impact. Either way, how can you determine the impact of your program? And which strategies may sound useful but are unlikely to produce accurate answers?
To examine these questions and get a “101” on impact evaluation, we’re joined by David Evans (@tukopamoja). He is a Senior Economist at the World Bank and the co-author, with Bruce Wydick, of a recent post on the Bank’s Development Impact blog on this topic.
The interview covers:
- The concept of impact
- Ways that organizations may try to estimate impact that generally won’t produce accurate answers
- Three strategies to more accurately estimate program impact:
  - Using a lottery, aka a randomized experiment
  - Using an eligibility cutoff, aka a regression discontinuity design
  - Using before-and-after data for both participants and nonparticipants, aka a difference-in-differences approach
- Factors to guide the choice of one impact evaluation strategy over another
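To make the third strategy concrete: a difference-in-differences estimate compares the change over time for participants against the change over time for nonparticipants, so that trends affecting both groups are netted out. Here is a minimal sketch using hypothetical average outcome values (the function name and numbers are illustrative, not from the interview):

```python
def diff_in_diff(part_before, part_after, nonpart_before, nonpart_after):
    """Difference-in-differences estimate of program impact:
    (change for participants) minus (change for nonparticipants)."""
    return (part_after - part_before) - (nonpart_after - nonpart_before)

# Hypothetical average test scores for the two groups, before and after:
# participants went from 60 to 70; nonparticipants went from 62 to 65.
estimate = diff_in_diff(60.0, 70.0, 62.0, 65.0)
print(estimate)  # 7.0: a 10-point gain for participants minus a 3-point background trend
```

The subtraction of the nonparticipant change is what distinguishes this approach from a simple before-and-after comparison, which would mistakenly attribute the full 10-point gain to the program.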