The Moving to Opportunity Demonstration’s long-term findings: An interview with Lawrence Katz, Professor, Harvard University – Episode #29

Individuals in economically disadvantaged neighborhoods fare less well across a range of indicators, but to what extent do poor neighborhoods per se contribute to this? Investigating the answer, which requires isolating the effect of neighborhoods, can help policymakers craft more effective anti-poverty policies.

This was the motivation behind the Moving to Opportunity (MTO) demonstration, one of the most important studies of poverty and the effects of neighborhoods in the United States. MTO was a randomized social experiment sponsored by the U.S. Department of Housing and Urban Development. It launched in 1994 and involved almost 5,000 families with children living in high-poverty public housing projects in five cities: Baltimore, Boston, Chicago, Los Angeles, and New York City.

To learn more about the MTO demonstration — including the questions that motivated the study, the study design, and its results — we’re joined by Lawrence Katz, an economics professor at Harvard University who helped lead the long-term, 15-year evaluation of MTO.

Harnessing Silicon Valley funding approaches to drive breakthrough solutions in the public sector: An interview with Jeffrey Brown, Development Innovation Ventures (DIV) Program at USAID – Episode #28

The Development Innovation Ventures Program (DIV) at USAID launched in 2010 with a mission to find, test and scale ideas that could radically improve global prosperity. Its key features include crowdsourcing of breakthrough solutions; staged funding to test those solutions; and prioritizing grant proposals based on their cost-effectiveness, their use of rigorous evidence to show impact, and their ability to be scaled up over time.

While DIV focuses on international development, its evidence-based, outcome focused grant design has relevance for public leaders in many fields and levels of government. In fact, DIV is one of a small but growing number of federal grant programs using staged funding, also called tiered-evidence grants or innovation funds. Staged funding helps public agencies focus grant dollars on approaches backed by strong evidence, while still encouraging innovative new approaches.

To learn more, we’re joined by Jeffrey Brown (@jeffhbrown) who leads DIV at USAID. The interview includes: An overview of DIV; Stage 1 funding and an example related to strengthening the work of frontline health workers in India; Stage 2 funding and an example related to bringing safe power sources to villages in India; Stage 3 funding and an example related to promoting safe drinking water in Kenya; an overview of what it takes to run a staged grant program; and the applicability of DIV’s approach to other policy areas.

Web extra: Jeffrey Brown discusses the usefulness of building rigorous evaluation strategies into grant-funded projects before they are launched, rather than trying to measure impact after the fact. [click here]

New York City’s Social Impact Bond, the first in the U.S.: An interview with Linda Gibbs, Deputy Mayor for Health and Human Services, New York City – Episode #27

In 2012, New York City launched the first Social Impact Bond (SIB) in the United States. Under the SIB model, investors provide up-front capital for preventive interventions and government only pays when measurable results are achieved. In New York City’s case, the SIB will fund services to about 3,000 adolescent men (ages 16 to 18) who are jailed at Rikers Island. The goal of the initiative, which will run from 2012 to 2015, is to reduce recidivism and its related budgetary and social costs.

To tell us more, we’re joined by Linda Gibbs (@lindagibbs), the Deputy Mayor for Health and Human Services under Mayor Michael Bloomberg.

Social Impact Bonds – also referred to as Pay for Success approaches – are being considered in a growing number of cities, states and federal agencies as a way to speed up the pace of social innovation, fund preventive services at a time of tight budgets, and improve social policy outcomes while reducing costs. The results from New York City’s SIB will therefore have implications across the nation.

As you listen to the interview, it may be helpful to view this diagram of the organizations involved in the Rikers Island SIB. To learn more about SIBs/Pay for Success more generally, resources include the learning hub maintained by the Nonprofit Finance Fund, Harvard’s SIB Lab, and this overview article from Community Development Investment Review.

Web extra: Linda Gibbs discusses the city’s broader effort to strengthen evidence-based policy, including building its analytic capabilities to learn and do what works in social policy. [click here]

The launch of J-PAL North America: An interview with Lawrence Katz, Harvard University – Episode #26

The Abdul Latif Jameel Poverty Action Lab, or J-PAL, was established in 2003 at MIT and is today a global network of researchers who use randomized evaluations to answer important questions within anti-poverty policy. Their mission is to reduce poverty by ensuring that policy is based on scientific evidence and research is translated into action. Their website includes summaries of more than 400 randomized evaluations conducted by members of the J-PAL network in 53 countries.

This year, J-PAL is launching a new initiative, J-PAL North America, to help bring new insights to important social policy questions in the United States and North America. To learn more, we’re joined by Lawrence Katz, a Professor of Economics at Harvard University. He is also one of two Scientific Directors of J-PAL North America, along with Amy Finkelstein of MIT.

The interview is designed to give public leaders an overview of this new resource. In particular, J-PAL can help program managers and other government leaders obtain the technical know-how and the resources (including potential partners with university experts) to use rigorous methods to answer critical policy and program questions. That, in turn, can improve program outcomes and cost effectiveness.

Web extra: Lawrence Katz describes his broader vision for the use of evidence and evaluation in government at the federal, state and local levels and what some important next steps are. [click here]

Using Lean Six Sigma to improve results in government: An interview with Jim Robinson, The George Washington University Center for Excellence in Public Leadership – Episode #25

Continually improving service delivery is a critical ability for high-performing public agencies at the federal, state and local levels — whether it’s innovating to better meet program participants’ needs, increasing efficiency, or solving problems in service delivery. A concept that public managers have borrowed from the private sector to improve service delivery is Lean Six Sigma. It’s a combination of two other management approaches, “Lean” and “Six Sigma.”

As management professor John Maleyeff has noted, Lean Six Sigma “provides a means to improve the delivery of services using a disciplined, project-based approach.” It uses a systematic five-step approach called DMAIC, which stands for Define (create problem statement and customer value definition); Measure (map the process and collect associated data); Analyze (identify problems and significant waste); Improve (find ways to eliminate waste and/or add value); and Control (develop implementation and follow-up plan). While those steps are central to the approach, one can use a variety of tools to achieve them, so there is considerable flexibility in one’s approach.

To learn more about the concept and how it can be used in the public sector, we speak with Jim Robinson. He is the Executive Director of The George Washington University Center for Excellence in Public Leadership. He has more than 25 years of experience, particularly in the private sector, working on issues of large-scale organization change and the building of high commitment/high performance organizations.

Using LouieStat and collaboration across agencies to improve results in Louisville: An interview with Theresa Reno-Weber, City of Louisville – Episode #24

Since becoming Mayor of Louisville in 2011, Greg Fischer and his team have launched a number of initiatives to strengthen the city government’s ability to improve results and address challenges that span traditional agency silos. Initiatives include:

  • LouieStat: Modeled after CitiStat and other “Stat” initiatives, LouieStat uses ongoing data-driven discussions between the Mayor’s Office and agency leaders about agency results and ways to improve those results.
  • Cross-functional teams: For issues too big to solve through the LouieStat process, the Mayor’s Office establishes cross-functional teams of city employees (from directors to line employees) to examine root causes using focus groups and other analytic tools and then propose solutions within 8 to 12 weeks — recommendations that are often approved on the spot by the mayor. To support teams’ efforts, the city provides training to team members on topics such as “plan, do, check, act,” lean process improvement, project management, and data collection/analysis.
  • Cross-agency LouieStat meetings: While most LouieStat meetings focus on specific agencies, the city also runs some LouieStat meetings that are focused on cross-agency topics. An example is VAPStat, focused on tackling the issue of vacant and abandoned properties.

To learn more about these efforts, we’re joined by Theresa Reno-Weber, the city’s Chief of Performance Improvement. She was previously a senior consultant at McKinsey & Company and served for ten years in the U.S. Coast Guard.

Web extras: Theresa Reno-Weber shares her advice for cities and other jurisdictions aiming to strengthen their use of data to improve results [click here]. She also describes a set of questions that helped guide the Fischer Administration’s broader strategy, including “What is the city government currently doing?”, “How well is city government performing?” and “How do we improve?” [click here]

Note: To see the “leadership lessons from a dancing guy” video referenced by Theresa, click here.

The PerformanceStat Potential: An Interview with Bob Behn, Professor, Harvard Kennedy School – Episode #24

Bob Behn of the Harvard Kennedy School joins us to discuss some of the insights from his forthcoming [published in June 2014] book, The PerformanceStat Potential: A Leadership Strategy for Producing Results.

PerformanceStat is Professor Behn’s term for the numerous “Stat” initiatives around the nation that, together, constitute one of the most important developments in public management and leadership in recent decades. From CitiStat in Baltimore to StateStat in Maryland to HUDStat at the U.S. Department of Housing and Urban Development to dozens of other examples, the PerformanceStat approach is an accountability and leadership strategy that involves ongoing, data-driven meetings to review performance and discuss ways to improve it.

Bob Behn is one of the nation’s foremost experts on performance management and on the leadership challenge of improving the performance of public agencies. He is the faculty chair of the Kennedy School’s executive program, Driving Government Performance: Leadership Strategies that Produce Results. He also writes the monthly online Bob Behn’s Performance Leadership Report.

Data for decision making in government: An interview with Benjamin Jones, Kellogg School of Management – Episode #23

As management guru Peter Drucker noted, you can’t manage what you don’t measure. Managers need data, in other words, to inform their decisions. But what types of data?

Benjamin Jones joins us to discuss different types of data that can be used to make decisions, including anecdotes, summary statistics, correlations, and the results from experiments (also known as randomized controlled trials). Each type of data has different advantages.

We also explore the difference between “operational” experiments (ones that test how to improve programs or services by comparing different approaches) and “existential” experiments (ones that test whether a program works or not) and hear about why the former are often the more relevant in public policy settings.

Benjamin Jones is an Associate Professor of Management and Strategy at the Kellogg School of Management at Northwestern University and the faculty director of the Kellogg Innovation and Entrepreneurship Initiative. He served as the senior economist at the White House Council of Economic Advisers and earlier served in the U.S. Department of the Treasury.

Using rigorous program evaluation to learn what works: An interview with Robinson Hollister, Swarthmore College – Episode #22

What does the term “counterfactual” mean and why is it important for rigorous program evaluation? What are the advantages of randomized controlled trials (RCTs) over non-experimental approaches to evaluation? And what surprising finding from the National Supported Work Demonstration showed the usefulness of evaluation with an experimental design?

We explore these and other questions with Robinson (Rob) Hollister, one of the nation’s experts on program evaluation, in an interview designed to give program managers and policy officials an accessible introduction to several key evaluation topics.

Professor Hollister is the Joseph Wharton Professor of Economics at Swarthmore College. He is a past winner of the Peter H. Rossi Award from the Association for Public Policy Analysis and Management (APPAM) for his contributions to the field of program evaluation. He has been involved in the design and evaluation of numerous programs in the fields of employment and training, education, welfare reform, and health. For a more detailed biography, see here.

Web extra: We explore additional program evaluation topics with Rob Hollister in the web extra:

  • An example of an RCT (focused on hormone replacement) that produced more accurate findings than a comparison group study [click here]
  • Why “keep it simple” is useful advice with RCTs [click here]
  • What “fidelity to the model” means and how much emphasis it deserves [click here]
  • The ways in which replication can be useful [click here]

A tip: Evaluators use several terms to describe the same approach, including “randomized controlled trial,” “experimental evaluation,” “evaluation with an experimental design” and “impact evaluation using random assignment.” These terms refer to evaluations with a program group (sometimes referred to as a treatment group) and a control group, where individuals are assigned to each group randomly — essentially by flipping a coin.

Apprenticeship as a state and local strategy to enhance skills and careers: An interview with Robert Lerman, Urban Institute and American University – Episode #21

Should states and localities expand the use of apprenticeship as a workforce development strategy? Robert Lerman argues yes. He is an Institution Fellow at the Urban Institute, a professor of economics at American University, and one of the nation’s leading experts on apprenticeship. In 2013, he founded the American Institute for Innovative Apprenticeship.

Today, countries such as Switzerland and Germany — and increasingly Australia and England — along with states such as South Carolina, are using apprenticeships to keep their workforces competitive and to train workers for higher-paying, growing fields. Under apprenticeship programs, as Robert Lerman explains, “individuals earn a salary while receiving training primarily through supervised, work‐based learning but also with related academic instruction. Employers, joint, union‐employer agreements, government agencies, and the military all sponsor apprenticeship programs. Apprentices are employees at the firms and organizations where they are training, and combine productive work along with learning experiences that lead to demonstrated proficiency in a significant array of tasks.”

Also of note, Mathematica Policy Research conducted an effectiveness assessment and cost-benefit analysis of registered apprenticeship (RA) in ten states. The 2012 study found that RA participants had substantially higher earnings than did nonparticipants and that the benefits of the RA program appear to be much larger than the costs.

Credits: Music at the end of the interview is by Maya Lerman and her band Maya and the Ruins.