Using LouieStat and collaboration across agencies to improve results in Louisville: An interview with Theresa Reno-Weber, City of Louisville – Episode #24

Since becoming Mayor of Louisville in 2011, Greg Fischer and his team have launched a number of initiatives to strengthen the city government’s ability to improve results and address challenges that span traditional agency silos. Initiatives include:

  • LouieStat: Modeled after CitiStat and other “Stat” initiatives, LouieStat involves ongoing, data-driven discussions between the Mayor’s Office and agency leaders about agency results and ways to improve them.
  • Cross-functional teams: For issues too big to solve through the LouieStat process, the Mayor’s Office establishes cross-functional teams of city employees (from directors to line employees) to examine root causes using focus groups and other analytic tools and then propose solutions within 8 to 12 weeks — recommendations that are often approved on the spot by the mayor. To support teams’ efforts, the city provides training to team members on topics such as “plan, do, check, act,” lean process improvement, project management, and data collection/analysis.
  • Cross-agency LouieStat meetings: While most LouieStat meetings focus on specific agencies, the city also runs some LouieStat meetings that are focused on cross-agency topics. An example is VAPStat, focused on tackling the issue of vacant and abandoned properties.

To learn more about these efforts, we’re joined by Theresa Reno-Weber, the city’s Chief of Performance Improvement. She was previously a senior consultant at McKinsey & Company and served for ten years in the U.S. Coast Guard.

Web extras: Theresa Reno-Weber shares her advice to cities and other jurisdictions aiming to strengthen their use of data to improve results [click here]. She also describes a set of questions that helped guide the Fischer Administration’s broader strategy, including “What is the city government currently doing?”, “How well is city government performing?”, and “How do we improve?” [click here]

Note: To see the “leadership lessons from a dancing guy” video referenced by Theresa, click here.

The PerformanceStat Potential: An Interview with Bob Behn, Professor, Harvard Kennedy School – Episode #24

Bob Behn of the Harvard Kennedy School joins us to discuss some of the insights from his then-forthcoming book, The PerformanceStat Potential: A Leadership Strategy for Producing Results [published in June 2014].

PerformanceStat is Professor Behn’s term for the numerous “Stat” initiatives around the nation that, together, constitute one of the most important developments in public management and leadership in recent decades. From CitiStat in Baltimore to StateStat in Maryland to HUDStat at the U.S. Department of Housing and Urban Development to dozens of other examples, the PerformanceStat approach is an accountability and leadership strategy that involves ongoing, data-driven meetings to review performance and discuss ways to improve it.

Bob Behn is one of the nation’s foremost experts on performance management and on the leadership challenge of improving the performance of public agencies. He is the faculty chair of the Kennedy School’s executive program, Driving Government Performance: Leadership Strategies that Produce Results. He also writes the monthly online Bob Behn’s Performance Leadership Report.

Data for decision making in government: An interview with Benjamin Jones, Kellogg School of Management – Episode #23

As management guru Peter Drucker noted, you can’t manage what you don’t measure. Managers need data, in other words, to inform their decisions. But what types of data?

Benjamin Jones joins us to discuss different types of data that can be used to make decisions, including anecdotes, summary statistics, correlations, and the results from experiments (also known as randomized controlled trials). Each type of data has different advantages.

We also explore the difference between “operational” experiments (ones that test how to improve programs or services by comparing different approaches) and “existential” experiments (ones that test whether a program works or not), and hear why the former are often more relevant in public policy settings.

Benjamin Jones is an Associate Professor of Management and Strategy at the Kellogg School of Management at Northwestern University and the faculty director of the Kellogg Innovation and Entrepreneurship Initiative. He served as the senior economist at the White House Council of Economic Advisers and earlier served in the U.S. Department of the Treasury.

Using rigorous program evaluation to learn what works: An interview with Robinson Hollister, Swarthmore College – Episode #22

What does the term “counterfactual” mean and why is it important for rigorous program evaluation? What are the advantages of randomized controlled trials (RCTs) over non-experimental approaches to evaluation? And what surprising finding from the National Supported Work Demonstration showed the usefulness of evaluation with an experimental design?

We explore these and other questions with Robinson (Rob) Hollister, one of the nation’s leading experts on program evaluation, in an interview designed to give program managers and policy officials an accessible introduction to several key evaluation topics.

Professor Hollister is the Joseph Wharton Professor of Economics at Swarthmore College. He is a past winner of the Peter H. Rossi Award from the Association for Public Policy Analysis and Management (APPAM) for his contributions to the field of program evaluation. He has been involved in the design and evaluation of numerous programs in the fields of employment and training, education, welfare reform, and health. For a more detailed biography, see here.

Web extra: We explore additional program evaluation topics with Rob Hollister in the web extra:

  • An example of an RCT (focused on hormone replacement) that produced more accurate findings than a comparison group study [click here]
  • Why “keep it simple” is useful advice with RCTs [click here]
  • What “fidelity to the model” means and how much emphasis it deserves [click here]
  • The ways in which replication can be useful [click here]

A tip: Evaluators use several terms to describe the same approach, including “randomized controlled trial,” “experimental evaluation,” “evaluation with an experimental design” and “impact evaluation using random assignment.” These terms all refer to evaluations with a program group (sometimes called a treatment group) and a control group, where individuals are assigned to the groups randomly, in essence by the flip of a coin.
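To make the coin-flip idea concrete, here is a minimal, hypothetical Python sketch of simple random assignment (the function name and the even 50/50 split are illustrative assumptions; real evaluations often use stratified or blocked randomization):

    import random

    def assign_groups(participants, seed=None):
        """Randomly assign each participant to the program (treatment)
        group or the control group -- the coin flip at the heart of an RCT."""
        rng = random.Random(seed)  # seeding makes the assignment reproducible
        program, control = [], []
        for person in participants:
            # Each participant has an equal chance of landing in either
            # group, so the groups differ only by chance at baseline.
            if rng.random() < 0.5:
                program.append(person)
            else:
                control.append(person)
        return program, control

    # Example: randomly split ten applicants into the two study arms.
    program, control = assign_groups([f"applicant_{i}" for i in range(10)], seed=42)

Because assignment depends only on the random draw, any later difference in outcomes between the program and control groups can be attributed to the program itself rather than to pre-existing differences between the groups.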

Apprenticeship as a state and local strategy to enhance skills and careers: An interview with Robert Lerman, Urban Institute and American University – Episode #21

Should states and localities expand the use of apprenticeship as a workforce development strategy? Robert Lerman argues yes. He is an Institution Fellow at the Urban Institute, a professor of economics at American University, and one of the nation’s leading experts on apprenticeship. In 2013, he founded the American Institute for Innovative Apprenticeship.

Today, countries such as Switzerland and Germany (and, increasingly, Australia and England), along with states such as South Carolina, are using apprenticeships to keep their workforces competitive and to train workers for higher-paying, growing fields. Under apprenticeship programs, as Robert Lerman explains, “individuals earn a salary while receiving training primarily through supervised, work-based learning but also with related academic instruction. Employers, joint union-employer agreements, government agencies, and the military all sponsor apprenticeship programs. Apprentices are employees at the firms and organizations where they are training, and combine productive work along with learning experiences that lead to demonstrated proficiency in a significant array of tasks.”

Also of note, Mathematica Policy Research conducted an effectiveness assessment and cost-benefit analysis of registered apprenticeship (RA) in ten states. The 2012 study found that RA participants had substantially higher earnings than nonparticipants and that the benefits of the RA program appear to be much larger than the costs.

Credits: Music at the end of the interview is by Maya Lerman and her band Maya and the Ruins.

Strengthening evaluation capacity within agencies: An interview with Naomi Goldstein, Office of Planning, Research and Evaluation at the Administration for Children and Families, HHS – Episode #20

For public leaders at the federal, state and local levels who want to strengthen their agencies’ abilities to learn what works and to continually improve performance, building program evaluation capacity within their agencies is essential. But what are the building blocks of that capacity? And why is the relationship between an evaluation office and a program office within an agency so important?

To explore these and other related issues, we speak with Naomi Goldstein, the Director of the Office of Planning, Research and Evaluation within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services. In her role, she advises the Assistant Secretary for Children and Families on improving the effectiveness and efficiency of ACF programs. She is one of the leading experts in program evaluation within the federal government and was awarded the Presidential Rank of Distinguished Executive in 2012.

You may also be interested in reading ACF’s evaluation policy, launched in 2012, which is designed to confirm ACF’s “commitment to conducting evaluations and to using evidence from evaluations to inform policy and practice.”

Web extra: Naomi Goldstein discusses the similarities and differences between program evaluation and performance management. [click here] As a postscript to our interview, she noted the value of combining typical performance management and evaluation approaches, including how experimental evaluations that use administrative data can produce relatively quick and inexpensive results.

A city’s effort to drive innovation and learning on a priority issue: An interview with Kristin Morse, New York City Center for Economic Opportunity – Episode #19

The Center for Economic Opportunity (CEO) is a unit within the Mayor’s Office in New York City. It was launched in 2006 by Mayor Michael Bloomberg to develop new and innovative anti-poverty initiatives and to rigorously test them to see what works. It provides about $100 million annually, primarily to city agencies, to fund pilot programs. The majority of funds come from the city, with additional support from state, federal and philanthropic sources. Since its launch, CEO has worked with 28 city agencies and over 200 community-based providers to pilot 50 programs. In recognition of its work, it won the 2012 Innovations in American Government Award.

The CEO provides insights into how public leaders can focus attention within government, and within their communities, on particular priority issues (in this case, reducing poverty); test new approaches; and rigorously evaluate the results in order to learn what works, scale up effective programs and stop doing what isn’t working. On the latter point, CEO has terminated about 20% of its programs for inadequate results, while at the same time scaling up several programs that have shown strong results.

To learn more, we are joined by Kristin Morse, CEO’s Executive Director.

Web extra: For brevity, the interview does not cover CEO’s Social Innovation Fund work, but more information is available here. This effort supports the replication of CEO’s most promising initiatives in eight urban areas across the U.S.

Performance budgeting in Austria: An interview with Gerhard Steger, Austrian Ministry of Finance – Episode #18

With a population about the size of Virginia’s, Austria may be a relatively small nation, but it provides a prominent example of implementing performance budgeting. In particular, a series of budget reforms in recent years has significantly shifted the federal budget process in Austria from one focused on the question, “How much do we spend?” to one with a much stronger focus on the question, “What results are we producing?”

Specific reforms include multiyear budgeting; the ability of ministries (that is, federal agencies) to keep any savings from cost-cutting or efficiencies; and a performance measurement system, including the requirement that each ministry set at least five key goals that are approved by parliament.

To tell us about performance budgeting in Austria, we are joined by Gerhard Steger, the Budget Director for the Austrian Ministry of Finance.

Learning from innovative businesses about creating a culture of experimentation in government: An interview with Jim Manzi, Author of “Uncontrolled” – Episode #17

Jim Manzi is the founder and chairman of Applied Predictive Technologies, a business analytics firm. His 2012 book Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics and Society argues for the usefulness of experimental methods—in other words, randomized controlled trials (RCTs)—for addressing important policy issues, from improving education outcomes to increasing economic growth to reducing crime.

In a review of Uncontrolled in the New York Times, columnist David Brooks writes, “Manzi wants to infuse government with a culture of experimentation.” Brooks also notes: “What you really need to achieve sustained learning, Manzi argues, is controlled experiments. Try something out. Compare the results against a control group. Build up an information feedback loop. This is how businesses learn. By 2000, the credit card company Capital One was running 60,000 randomized tests a year — trying out different innovations and strategies. Google ran about 12,000 randomized experiments in 2009 alone.”

Washington State, a leader in the use of cost-benefit analysis: An interview with Steve Aos, Director, Washington State Institute for Public Policy – Episode #17

Steve Aos is the Director of the Washington State Institute for Public Policy (WSIPP). The Institute’s mission is to carry out practical, non-partisan research—at legislative direction—on issues of importance to Washington State. Areas of focus have included education, criminal justice, welfare, children and adult services, health and more. WSIPP’s work includes cost-benefit analyses of various policy options so that the legislature can make more informed decisions about cost-effective policies.

Although not discussed in the interview, a related initiative builds on WSIPP’s work in Washington State: through the Results First Initiative, The Pew Charitable Trusts and the MacArthur Foundation are currently working with 14 states to develop and strengthen their cost-benefit analysis capabilities.

Web extra: Steve Aos explains how trust between WSIPP and the legislature, built over time, is an important aspect of the Institute’s ability to produce analysis that is valued and used. [click here]
