Reducing fear of program evaluation: An interview with Paul Decker, President of Mathematica Policy Research and 2013 President of APPAM – Episode #36

How can public leaders encourage greater use of program evaluation to learn what works and to improve outcomes for citizens? One important part of the answer, noted Paul Decker in his 2013 Presidential Address at the APPAM public policy conference, is to address the fear often associated with program evaluation. That fear stems from framing evaluation as simply a tool to justify continued or discontinued funding for a program. This “rigid framing of the role of program evaluation,” he argues, “poses a set of false choices that I believe ultimately undermines the use and creation of evidence.”

To learn more, we’re joined by Paul Decker, who recently concluded his year-long term as President of APPAM — the Association for Public Policy Analysis and Management. Since 2007, he has also been the President and Chief Executive Officer of Mathematica Policy Research. He is a nationally recognized expert in the design and implementation of evaluations of education and workforce development programs.

Using impact evaluation to improve program performance: An interview with Rachel Glennerster, Executive Director, Jameel Poverty Action Lab – Episode #35

How can public officials move beyond guesses and hunches to more data-driven decision-making? One approach borrows from the health field, where randomized drug trials are a standard way to test the efficacy of potential pharmaceutical treatments. Leading companies also use randomized experiments — testing operational changes to see if they work better — to improve their products and services. This same approach can be used in public policy, with individuals randomly assigned to a program group and a control group, in order to rigorously test what works and improve program performance. The approach is known as randomized controlled trials (RCTs) or impact evaluations using an experimental design.

To learn more, we’re joined by Rachel Glennerster, the Executive Director of the Jameel Poverty Action Lab, or J-PAL, based at MIT. She’s also the co-author, with Kudzai Takavarasha, of the new book Running Randomized Evaluations. It is a how-to guide for conducting valid randomized impact evaluations of social programs in developing countries. Our interview focuses on broader insights that are applicable to policymakers and public managers in the United States.

Web extra: Rachel Glennerster talks about some of the ethical issues involved in using randomized controlled trials in public policy. [click here]

Focusing a social service agency on results and improved outcomes: An interview with Reggie Bicha, Executive Director of the Colorado Department of Human Services – Episode #34

How can a social service agency, whether at the state or local level, create an organizational culture focused on results? How can it create ongoing, meaningful conversations among agency leaders and staff that drive meaningful improvements? To explore these issues, we’re joined by Reggie Bicha (@reggiebicha), the Executive Director of the Colorado Department of Human Services under Governor John Hickenlooper and former Secretary of the Wisconsin Department of Children and Families under Governor Jim Doyle.

Our interview focuses on the performance leadership strategy, called KidStat, that Reggie Bicha developed and launched in Wisconsin in 2009. As Secretary, he used the KidStat process to create alignment within the agency, helping ensure that “the right staff with the right resources delivering the right programs at the right time to the right people that’s going to help our state achieve what it is that I’ve been assigned by the legislator and governor to achieve.” Today, as head of the Department of Human Services in Colorado, Reggie Bicha continues to use a “stat” initiative, called C-Stat, to focus his agency on results.

Web extra: Nikki Hatch, who was formerly in charge of the KidStat process at the Wisconsin Department of Children and Families and today continues to work with Reggie Bicha as the Deputy Executive Director of Operations at the Colorado Department of Human Services, provides an example of how KidStat led to a specific improvement in performance. [click here]

Building an evidence base for an agency’s programs: An interview with Chris Spera, Chief Evaluation Officer, Corporation for National and Community Service – Episode #34

How can a public agency start to build an evidence base about what works for its programs? How can it strengthen an organizational focus on results and evidence — not just for itself, but also among the nonprofits that it funds through grants? And how can agencies use innovative tiered-evidence grant programs to focus grant dollars on approaches backed by strong evidence while still allowing promising new approaches to be tested?

To explore these issues, we’re joined by Chris Spera, the Director of Research and Evaluation (i.e., Chief Evaluation Officer) at the Corporation for National and Community Service (CNCS). CNCS is a $1 billion federal agency that invests in community programs and interventions and helps more than five million Americans improve the lives of their fellow citizens through service. Its signature programs include AmeriCorps, Senior Corps and the Social Innovation Fund. [Note: Since this interview, Chris has taken a new position at Abt Associates.]

Evidence-based reform in education: An interview with Robert Slavin, Professor, Johns Hopkins University – Episode #33

The field of education has seen a growing emphasis on the use of evidence for decision making about programs and practices. Even so, much more progress is needed. To learn more, we’re joined by Robert Slavin (@RobertSlavin), a leader in the area of evidence policy in education.

Dr. Slavin is the Director of the Center for Research and Reform in Education at Johns Hopkins University, Chairman of the Success for All Foundation, a part-time professor at the Institute for Effective Education at the University of York (England) and a columnist for the Huffington Post. He recently gave the keynote address to the American Psychological Association titled “Evidence-based reform in education.”

The interview discusses the current role of evidence in education, a vision for its greater use, and examples of efforts to use and grow the evidence base in education and encourage research-based reform, including the Investing in Innovation (i3) grant program at the U.S. Department of Education.

States’ Use of Cost-Benefit Analysis: An interview with Gary VanLandingham, Pew-MacArthur Results First Initiative – Episode #33

The Results First Initiative, a joint project of the Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation, launched in 2011 to help states strengthen their ability to use cost-benefit analysis to invest in policies and programs that work and that are cost effective. The initiative’s recent report, “States’ Use of Cost-Benefit Analysis,” is a first-of-its-kind study to measure states’ use of this analytical tool. To learn more, we’re joined by Gary VanLandingham, Director of the Results First Initiative. In particular, we discuss:

  • To what extent are states today conducting cost-benefit analyses?
  • On which policy areas do those analyses tend to focus?
  • To what extent do states use the results when making policy and budget decisions?
  • What challenges do states face in conducting and using these studies?

Web extras: Gary VanLandingham discusses the study’s findings about where the capacity to do cost-benefit analyses is being built within states — for example, within governors’ offices or state legislatures. [click here] He also provides an update about the Results First project. [click here]

Update (2016): A video overview of the Results First Initiative is available. [click here]

Fighting for reliable evidence: An interview with Judith Gueron, MDRC, and Howard Rolston, Abt Associates – Episode #32

Judith Gueron and Howard Rolston join us to discuss their new book, Fighting for Reliable Evidence, published by the Russell Sage Foundation. It describes the four-decade effort to develop and use rigorous evidence from random assignment studies to improve social policy, particularly in the areas of welfare-to-work and anti-poverty policy.

Judith Gueron is the President Emerita at the social policy research firm MDRC. She joined MDRC as Research Director at its founding in 1974 and served as its President from 1986 to 2004. Howard Rolston is a principal associate at Abt Associates. He served from 1986 to 2004 as the Director of the Office of Planning, Research and Evaluation (OPRE) at the Administration for Children and Families (ACF) within the U.S. Department of Health and Human Services.

Local perspectives on PerformanceStat: An interview with David Gottesman, Montgomery County, Maryland, and Greg Useem, City of Alexandria, Virginia – Episode #31

The “stat” approach, also called “PerformanceStat,” is a results-focused leadership strategy probably best known from CitiStat in Baltimore, but also used by dozens of local, state and federal offices and programs.

A key element involves “stat meetings” in which different agencies within a jurisdiction (or larger department) meet with leadership every few weeks or months to review their performance measures and discuss ways to improve performance. It is designed to create an ongoing, data-driven, substantive discussion about what’s working, what’s not, and next steps for strengthening results.

To learn more about PerformanceStat at the local level, we’re joined by two leaders of performance management efforts in the Washington DC area:

  • David Gottesman is the CountyStat Manager for the Montgomery County Executive in Maryland; and
  • Greg Useem is the Chief Performance Officer for the City of Alexandria, Virginia, and runs AlexStat.

Web extra: David Gottesman describes a new DC-regional network for performance management practitioners being launched in October 2013, in partnership with the University of Maryland. [click here] The effort is modeled in part on the StatNet initiative in New England.

A provider’s perspective on random assignment evaluation: An interview with Sarah Hurley, Youth Villages – Episode #30

What is it like for a nonprofit social service provider to be part of a random assignment evaluation, also known as a randomized controlled trial (RCT)? And what are the key benefits and challenges of being involved in this type of evaluation? To explore these questions, we’re joined by Sarah Hurley, the Director of Research at Youth Villages.

Youth Villages, based in Memphis, Tennessee, provides behavioral health services to children and adolescents in 11 states and the District of Columbia. In 2008, it embarked on a random assignment evaluation of one of its signature programs, the Transitional Living Program. The study is being conducted by an independent evaluation firm and, with a sample size of more than 1,300, is the largest trial to date of an intervention for youth aging out of foster care. Preliminary year-one results are due in 2014, but Youth Villages already has firsthand experience with being part of a rigorous evaluation.

Web extras: Sarah Hurley discusses some of the ethical considerations involved in implementing a random assignment evaluation. [click here] She also provides her advice for other social service providers, or other types of organizations, that are considering undertaking an RCT. [click here]

Important tools for evidence-based decision making: An interview with Margery Turner, Urban Institute – Episode #30

What are some of the key tools that policymakers and practitioners can draw on to inform and strengthen decisions? To explore these issues, we’re joined by Margery Turner (@maturner), a Senior Vice President at the Urban Institute. Her recent testimony before the House Ways and Means Committee’s Subcommittee on Human Resources was titled “Evidence-Based Policymaking Requires a Portfolio of Tools.”

In her testimony, she writes: “Today more than ever, policymakers need evidence to help inform major decisions about program design, implementation, and funding. Whether assessing the likely effectiveness of a new initiative, comparing competing approaches to a given problem, figuring out where to cut, or refining a program’s rules to make it more cost effective, decisions based on rigorous evidence make better use of scarce public dollars and improve outcomes for people.”

The tools she discusses in the interview include diagnostic research, microsimulation models, implementation research, randomized controlled trials, and rapid, operationally focused experimentation.