The launch of J-PAL North America: An interview with Lawrence Katz, Harvard University – Episode #26

The Abdul Latif Jameel Poverty Action Lab, or J-PAL, was established in 2003 at MIT and is today a global network of researchers who use randomized evaluations to answer important questions in anti-poverty policy. Its mission is to reduce poverty by ensuring that policy is based on scientific evidence and that research is translated into action. Its website includes summaries of more than 400 randomized evaluations conducted by members of the J-PAL network in 53 countries.

This year, J-PAL is launching a new initiative, J-PAL North America, to help bring new insights to important social policy questions in the United States and North America. To learn more, we’re joined by Lawrence Katz, a Professor of Economics at Harvard University. He is also one of two Scientific Directors of J-PAL North America, along with Amy Finkelstein of MIT.

The interview is designed to give public leaders an overview of this new resource. In particular, J-PAL can help program managers and other government leaders obtain the technical know-how and the resources (including potential partnerships with university experts) to use rigorous methods to answer critical policy and program questions. That, in turn, can improve program outcomes and cost effectiveness.

Web extra: Lawrence Katz describes his broader vision for the use of evidence and evaluation in government at the federal, state and local levels and what some important next steps are. [click here]

Using Lean Six Sigma to improve results in government: An interview with Jim Robinson, The George Washington University Center for Excellence in Public Leadership – Episode #25

Continually improving service delivery is a critical capability for high-performing public agencies at the federal, state and local levels — whether that means innovating to better meet program participants’ needs, increasing efficiency, or solving problems in service delivery. One concept that public managers have borrowed from the private sector to improve service delivery is Lean Six Sigma, a combination of two other management approaches, “Lean” and “Six Sigma.”

As management professor John Maleyeff has noted, Lean Six Sigma “provides a means to improve the delivery of services using a disciplined, project-based approach.” It uses a systematic five-step approach known as DMAIC:

  • Define: create a problem statement and a definition of customer value
  • Measure: map the process and collect associated data
  • Analyze: identify problems and significant waste
  • Improve: find ways to eliminate waste and/or add value
  • Control: develop an implementation and follow-up plan

While those steps are central to the approach, one can use a variety of tools to carry them out, so there is considerable flexibility in how the approach is applied.

To learn more about the concept and how it can be used in the public sector, we speak with Jim Robinson. He is the Executive Director of The George Washington University Center for Excellence in Public Leadership. He has more than 25 years of experience, particularly in the private sector, working on large-scale organizational change and the building of high-commitment/high-performance organizations.

Using LouieStat and collaboration across agencies to improve results in Louisville: An interview with Theresa Reno-Weber, City of Louisville – Episode #24

Since Greg Fischer became Mayor of Louisville in 2011, he and his team have launched a number of initiatives to strengthen the city government’s ability to improve results and to address challenges that span traditional agency silos. Initiatives include:

  • LouieStat: Modeled after CitiStat and other “Stat” initiatives, LouieStat uses ongoing data-driven discussions between the Mayor’s Office and agency leaders about agency results and ways to improve those results.
  • Cross-functional teams: For issues too big to solve through the LouieStat process, the Mayor’s Office establishes cross-functional teams of city employees (from directors to line employees) to examine root causes using focus groups and other analytic tools and then propose solutions within 8 to 12 weeks — recommendations that are often approved on the spot by the mayor. To support teams’ efforts, the city provides training to team members on topics such as “plan, do, check, act,” lean process improvement, project management, and data collection/analysis.
  • Cross-agency LouieStat meetings: While most LouieStat meetings focus on specific agencies, the city also runs some LouieStat meetings that are focused on cross-agency topics. An example is VAPStat, focused on tackling the issue of vacant and abandoned properties.

To learn more about these efforts, we’re joined by Theresa Reno-Weber, the city’s Chief of Performance Improvement. She was previously a senior consultant at McKinsey & Company and served for ten years in the U.S. Coast Guard.

Web extras: Theresa Reno-Weber shares her advice for cities and other jurisdictions aiming to strengthen their use of data to improve results [click here]. She also describes a set of questions that helped guide the Fischer Administration’s broader strategy, including “What is the city government currently doing?”, “How well is city government performing?” and “How do we improve?” [click here]

Note: To see the “leadership lessons from a dancing guy” video referenced by Theresa, click here.

The PerformanceStat Potential: An Interview with Bob Behn, Professor, Harvard Kennedy School – Episode #24

Bob Behn of the Harvard Kennedy School joins us to discuss some of the insights from his book, The PerformanceStat Potential: A Leadership Strategy for Producing Results (forthcoming at the time of this interview and published in June 2014).

PerformanceStat is Professor Behn’s term for the numerous “Stat” initiatives around the nation that, together, constitute one of the most important developments in public management and leadership in recent decades. From CitiStat in Baltimore to StateStat in Maryland to HUDStat at the U.S. Department of Housing and Urban Development to dozens of other examples, the PerformanceStat approach is an accountability and leadership strategy built on ongoing, data-driven meetings to review performance and discuss ways to improve it.

Bob Behn is one of the nation’s foremost experts on performance management and on the leadership challenge of improving the performance of public agencies. He is the faculty chair of the Kennedy School’s executive program, Driving Government Performance: Leadership Strategies that Produce Results. He also writes the monthly online Bob Behn’s Performance Leadership Report.

Data for decision making in government: An interview with Benjamin Jones, Kellogg School of Management – Episode #23

As the management adage often attributed to Peter Drucker puts it, you can’t manage what you don’t measure. Managers, in other words, need data to inform their decisions. But what types of data?

Benjamin Jones joins us to discuss different types of data that can be used to make decisions, including anecdotes, summary statistics, correlations, and the results from experiments (also known as randomized controlled trials). Each type of data has different advantages.

We also explore the difference between “operational” experiments (ones that test how to improve programs or services by comparing different approaches) and “existential” experiments (ones that test whether a program works at all) and hear why the former are often more relevant in public policy settings.

Benjamin Jones is an Associate Professor of Management and Strategy at the Kellogg School of Management at Northwestern University and the faculty director of the Kellogg Innovation and Entrepreneurship Initiative. He served as the senior economist at the White House Council of Economic Advisers and earlier served in the U.S. Department of the Treasury.

Using rigorous program evaluation to learn what works: An interview with Robinson Hollister, Swarthmore College – Episode #22

What does the term “counterfactual” mean and why is it important for rigorous program evaluation? What are the advantages of randomized controlled trials (RCTs) over non-experimental approaches to evaluation? And what surprising finding from the National Supported Work Demonstration showed the usefulness of evaluation with an experimental design?

We explore these and other questions with Robinson (Rob) Hollister, one of the nation’s experts on program evaluation, in an interview designed to give program managers and policy officials an accessible introduction to several key evaluation topics.
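
To make the counterfactual idea concrete before diving in, here is a minimal Python sketch with invented outcome numbers (they are purely illustrative and not drawn from any study discussed in the episode). Because random assignment makes the control group a stand-in for the counterfactual, the impact estimate is simply the difference in average outcomes between the two groups:

```python
# Hypothetical outcome data (e.g., annual earnings in dollars), invented
# purely for illustration -- not results from any actual evaluation.
program_group_outcomes = [31000, 28500, 33200, 29800]
control_group_outcomes = [27500, 26900, 30100, 28200]

def mean(values):
    return sum(values) / len(values)

# The control group's average approximates the counterfactual: what the
# program group's outcomes would have been without the program.
impact_estimate = mean(program_group_outcomes) - mean(control_group_outcomes)
print(f"Estimated program impact: ${impact_estimate:,.0f}")
```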

Professor Hollister is the Joseph Wharton Professor of Economics at Swarthmore College. He is a past winner of the Peter H. Rossi Award from the Association for Public Policy Analysis and Management (APPAM) for his contributions to the field of program evaluation. He has been involved in the design and evaluation of numerous programs in the fields of employment and training, education, welfare reform and health. For a more detailed biography, see here.

Web extra: We explore additional program evaluation topics with Rob Hollister in the web extra:

  • An example of an RCT (focused on hormone replacement) that produced more accurate findings than a comparison group study [click here]
  • Why “keep it simple” is useful advice with RCTs [click here]
  • What “fidelity to the model” means and how much emphasis it deserves [click here]
  • The ways in which replication can be useful [click here]

A tip: Evaluators use several terms to describe the same approach, including “randomized controlled trial,” “experimental evaluation,” “evaluation with an experimental design” and “impact evaluation using random assignment.” These terms all refer to evaluations with a program group (sometimes called a treatment group) and a control group, where individuals are assigned to each group randomly, essentially by flipping a coin for each person.
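
As a minimal sketch of what that coin flip looks like in practice (using hypothetical participant names, purely for illustration), the following Python snippet shuffles a participant list into a random order and splits it evenly into a program group and a control group:

```python
import random

# Hypothetical participant identifiers, for illustration only
participants = ["person_1", "person_2", "person_3",
                "person_4", "person_5", "person_6"]

random.shuffle(participants)  # put the list in a random order

# The first half of the shuffled list becomes the program (treatment)
# group; the second half becomes the control group.
midpoint = len(participants) // 2
program_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Program group:", program_group)
print("Control group:", control_group)
```

Because chance alone determines who lands in each group, the two groups are statistically similar at the outset, which is what allows the control group to serve as the counterfactual.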

Apprenticeship as a state and local strategy to enhance skills and careers: An interview with Robert Lerman, Urban Institute and American University – Episode #21

Should states and localities expand the use of apprenticeship as a workforce development strategy? Robert Lerman argues yes. He is an Institution Fellow at the Urban Institute, a professor of economics at American University, and one of the nation’s leading experts on apprenticeship. In 2013, he founded the American Institute for Innovative Apprenticeship.

Today, countries such as Switzerland and Germany, and increasingly Australia and England — along with states such as South Carolina — are using apprenticeships to keep their workforces competitive and to train workers for higher-paying, growing fields. Under apprenticeship programs, as Robert Lerman explains, “individuals earn a salary while receiving training primarily through supervised, work‐based learning but also with related academic instruction. Employers, joint union‐employer agreements, government agencies, and the military all sponsor apprenticeship programs. Apprentices are employees at the firms and organizations where they are training, and combine productive work along with learning experiences that lead to demonstrated proficiency in a significant array of tasks.”

Also of note, Mathematica Policy Research conducted an effectiveness assessment and cost-benefit analysis of registered apprenticeship (RA) in ten states. The 2012 study found that RA participants had substantially higher earnings than nonparticipants and that the benefits of the RA program appear to be much larger than the costs.

Credits: Music at the end of the interview is by Maya Lerman and her band Maya and the Ruins.

Strengthening evaluation capacity within agencies: An interview with Naomi Goldstein, Office of Planning, Research and Evaluation at the Administration for Children and Families, HHS – Episode #20

For public leaders at the federal, state and local levels who want to strengthen their agencies’ abilities to learn what works and to continually improve performance, building program evaluation capacity within their agencies is essential. But what are the building blocks of that capacity? And why is the relationship between an evaluation office and a program office within an agency so important?

To explore these and other related issues, we speak with Naomi Goldstein, the Director of the Office of Planning, Research and Evaluation within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services. In her role, she advises the Assistant Secretary for Children and Families on improving the effectiveness and efficiency of ACF programs. She is one of the leading experts in program evaluation within the federal government and was awarded the Presidential Rank of Distinguished Executive in 2012.

You may also be interested in reading ACF’s evaluation policy, launched in 2012, which is designed to confirm ACF’s “commitment to conducting evaluations and to using evidence from evaluations to inform policy and practice.”

Web extra: Naomi Goldstein discusses the similarities and differences between program evaluation and performance management. [click here] As a postscript, she commented after our interview about the value of combining typical performance management and evaluation approaches, including how experimental evaluations that use administrative data can produce relatively quick and inexpensive results.

A city’s effort to drive innovation and learning on a priority issue: An interview with Kristin Morse, New York City Center for Economic Opportunity – Episode #19

The Center for Economic Opportunity (CEO) is a unit within the Mayor’s Office in New York City. It was launched in 2006 by Mayor Michael Bloomberg to develop new and innovative anti-poverty initiatives and to rigorously test them to see what works. It provides about $100 million annually, primarily to city agencies, to fund pilot programs. The majority of funds come from the city, with additional support from state, federal and philanthropic sources. Since its launch, CEO has worked with 28 city agencies and over 200 community-based providers to pilot 50 programs. In recognition of its work, it won the 2012 Innovations in American Government Award.

CEO provides insights into how public leaders can focus attention within government, and within their communities, on particular priority issues (in this case, reducing poverty); test new approaches; and rigorously evaluate the results in order to learn what works, scale up effective programs and stop doing what isn’t working. On the latter point, CEO has terminated about 20% of its programs for inadequate results, while at the same time scaling up several programs that have shown strong results.

To learn more, we are joined by Kristin Morse, CEO’s Executive Director.

Web extra: For brevity, the interview does not cover CEO’s Social Innovation Fund work, but more information is available here. This effort supports the replication of CEO’s most promising initiatives in eight urban areas across the U.S.

Performance budgeting in Austria: An interview with Gerhard Steger, Austrian Ministry of Finance – Episode #18

With a population about the size of Virginia’s, Austria may be a relatively small nation, but it provides a prominent example of implementing performance budgeting. In particular, a series of budget reforms in recent years has significantly shifted the federal budget process in Austria from one focused on the question, “How much do we spend?” to one with a much stronger focus on the question, “What results are we producing?”

Specific reforms include multiyear budgeting, the ability of ministries (that is, federal agencies) to keep any savings from cost-cutting or efficiencies, and a performance measurement system including the requirement that each ministry set at least five key goals that are approved by parliament.

To tell us about performance budgeting in Austria, we are joined by Gerhard Steger, Budget Director for the Austrian Ministry of Finance.
