How states can use “efficacy networks” to test strategies for school improvement: An interview with Tom Kane, Professor, Harvard Graduate School of Education – Episode #142

The new federal education law, the Every Student Succeeds Act (ESSA), emphasizes the importance of evidence, including defining four levels of evidence-based practices. The law, however, leaves it to states to decide how much they want to build an evidence base and how much to nudge districts toward choosing more effective strategies. So what should state education leaders who want to leverage the new law do to encourage districts to learn, and act on, what works for students?
Tom Kane joins us for a two-part series to provide suggestions. In this podcast episode, he discusses how states could use the authority and resources provided by ESSA to launch a system of “efficacy networks,” meaning collections of local agencies committed to measuring the impact of the interventions they’re using. As he notes, “An overlapping system of efficacy networks working with local [education] agencies would create a mechanism for continuous testing and improvement in U.S. education. More than any single policy initiative or program, such a system would be a worthwhile legacy for any state leader.”
He also describes how the Proving Ground initiative, run by the Center for Education Policy Research (CEPR) at Harvard, is demonstrating the value of an efficacy network. CEPR is working with 13 school agencies to develop a model for easily conducting low-cost, local pilots.
Tom Kane is a professor of education and economics at the Harvard Graduate School of Education and faculty director of CEPR. His recent article in the journal Education Next is called “Making Evidence Locally: Rethinking education research under the Every Student Succeeds Act.”