In Horace's Compromise, penned nearly four decades ago, author Theodore Sizer famously describes classrooms shaped by a comfortable but dysfunctional agreement: The teacher would pretend to teach, and the students would pretend to learn. The result was a pageant of schooling shorn of the substance of learning.
We can't help but wonder if something similar characterizes the relationship between education researchers and education leaders. Researchers pretend to be interested in surfacing useful findings, and those in schools pretend to be interested in using what they've learned.
For all the cheery talk of "evidence-based practice," "research-practice partnerships," and the rest, the truth is that schools have long employed practices with scant empirical grounding. Indeed, the quality of evidence frequently seems a matter of little import, with findings more often used to justify the decisions of school or system leaders than as a way to seriously evaluate them.
Sizer knew there were outlier teachers, those who rejected the compromise he sketched. Similarly, there are outlier researchers and education leaders. One of us (Goldstein) has firsthand experience collaborating on educational evaluations with two recent winners of the Nobel Prize in economics. Those scholars brought skills, savvy, and a genuine curiosity about where the truth lies. Notably, they didn't see themselves as part of the education community. This made it relatively painless for them to embrace results that contradicted their assumptions or the expert consensus.
But we've found that's the exception. Most education researchers face powerful professional incentives to till the same field of study for decades. Along the way, they cultivate relationships, funding, and influence as they ascend within their intimate subfield. This yields predictable brands and tight-knit tribalism. As a result, researchers often wind up studying questions and employing methods calculated to impress their colleagues and funders, regardless of how relevant any of that may be for educators or kids.
Most education leaders and entrepreneurs seem to have made their peace with this state of affairs. After all, few of those who have created or shepherded schools, interventions, curricula, programs, trainings, or software are all that eager to see years of their life's work negated by an evaluation that might conclude, "Nope, that doesn't work." Not many will willingly subject their ego to that kind of Judgment Day, especially when it poses an existential threat to reputations and future grants.
Yet some kind of evaluation is frequently a condition of sustained funding. Consequently, what's emerged is a cottage industry of friendly evaluators who discreetly look away when practitioners spin null results or negligible gains as wondrous news. Indeed, when the evaluators of a given intervention or school model are themselves architects or enthusiasts of the reform in question, it'd be surprising if write-ups flatly concluded that it failed. (Jon Baron of the Coalition for Evidence-Based Policy used to document this dynamic, and his successors delight, as he did, in skewering press releases announcing research findings that don't accord with the actual results.)
We think this is a story not of bad behavior but of professional incentives and human nature. We wind up with another sort of Horace's Compromise and another dysfunctional status quo.
We don't know of a simple fix to the perverse incentives at work, but we know there are educational leaders and researchers who've tired of this happy dance and are ready to partner in pursuit of sometimes uncomfortable truths. For those individuals, we have six suggestions:
- Be clear about what you really want from an evaluation: truth or compliments.
- Make a "prenup" describing what happens if the results are disappointing. Just the act of pondering a negative result can set up a better evaluation. If you have an appetite for hard truths, gauge the risk of actually pursuing them. Talk to your board members or funders. Are they prepared to see uncomfortable findings? Will they walk away if the results disappoint, or will they actively commit to supporting your efforts to learn from the evidence, to essentially redo your model and try again?
- Get a good sense of what you want before seeking a partner. For education leaders, are you seeking hard numbers or something more descriptive? For researchers, are you intent on gauging whether a school or program "works" or are you more interested in learning how it works and might be improved? Knowing this upfront can make it easier to find the right researcher and to ask the right questions.
- Leaders need to deliberately seek out the right researcher. Google your question and words like "evaluation" or "randomized controlled trial (RCT)." Gather the names of a few scholars who've studied questions like yours, and try to find a couple of folks who haven't (remember that researchers can benefit from being outside the in-group bubble). Introductions are great, but cold emails can also work surprisingly well. As you narrow your search, peruse what they've published and check out their Twitter accounts. Seek out researchers who seem free from agendas and are willing to question convention.
- Researchers need to find education leaders or entrepreneurs who are serious about R&D and truth-seeking. They should look off the beaten path for those leaders who can demonstrate that they're willing to do what it takes to collect useful, reliable data and who can point to a track record of countenancing hard truths, acknowledging what's not working, or using data to refine their programs and practices. Finding such partners is easier when researchers cultivate new networks and are open to exploring questions that may stretch beyond their comfort zone.
- Finally, leaders should interview a prospective researcher the way they would an architect, and researchers should scrutinize potential partners the way architects would a client. Think of it as a negotiation. With an architect and client, it's a discussion of ideas, constraints, and practical considerations. Leaders need to pose questions to be tackled while understanding that researchers will bring their own queries and expertise on how to find the answers.
There's great power in evidence. It can enable us to learn how to better support and educate students. But, as every courtroom drama teaches, evidence is not the same as truth. There have been some notable strides in the methodological sophistication of 21st-century educational research. But it will take much more if we are to trade today's research-practice pageantry for something more valuable.