
Opinion Blog


Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute offers straight talk on matters of policy, politics, research, and reform.


The 2024 RHSU Edu-Scholar Public Influence Scoring Rubric

How the 2024 Edu-Scholar Rankings are calculated
By Rick Hess — January 03, 2024

Tomorrow, I’ll be unveiling the 2024 RHSU Edu-Scholar Public Influence Rankings, recognizing the 200 university-based scholars who had the biggest influence on educational practice and policy last year. This will be the 14th annual edition of the rankings. Today, I want to run through the methodology used to generate those rankings.

Given that more than 20,000 university-based faculty in the United States are researching education, simply making it onto the Edu-Scholar list is a noteworthy feat. The list comprises university-based scholars who focus primarily on educational questions (with “university-based” meaning a formal university affiliation). Scholars who do not have a formal affiliation on a university website are ineligible.

The top 150 finishers from last year automatically qualified for a spot in this year’s Top 200, so long as they accumulated at least 10 “active points” in last year’s scoring. (This gauges current activity and includes all categories except Google Scholar and Book Points, which measure career-spanning influence.) The automatic qualifiers were then augmented by “at-large” additions chosen by the RHSU Selection Committee, a disciplinarily, methodologically, and ideologically diverse group of accomplished scholars. All Selection Committee members had automatically qualified for this year’s rankings.

I’m indebted to the 2024 RHSU Selection Committee for its assistance and want to acknowledge its members: Joshua Angrist (MIT), Richard Arum (UC Irvine), Deborah Ball (U. Michigan), Linda Darling-Hammond (Stanford), Nell Duke (U. Michigan), Donna Ford (Ohio State), Marybeth Gasman (Rutgers), Dan Goldhaber (U. Washington), Kris Gutiérrez (UC Berkeley), Eric Hanushek (Stanford), Shaun Harper (USC), Douglas Harris (Tulane), Carolyn Heinrich (Vanderbilt), Jeffrey Henig (Columbia), Tyrone Howard (UCLA), Thomas Kane (Harvard), Robert Kelchen (UT Knoxville), Helen Ladd (Duke), Marc Lamont Hill (CUNY), Susanna Loeb (Stanford), Bridget Terry Long (Harvard), Tressie McMillan Cottom (UNC Chapel Hill), Ernest Morrell (Notre Dame), Pedro Noguera (USC), Laura Perna (U. Penn), Robert Pianta (U. Virginia), Jonathan Plucker (Johns Hopkins), Stephen Raudenbush (U. Chicago), Katharine Strunk (U. Penn), Carola Suarez-Orozco (Harvard), Ivory Toldson (Howard), Carol Tomlinson (U. Virginia), Jacob Vigdor (U. Washington), Kevin Welner (CU Boulder), Martin West (Harvard), Sam Wineburg (Stanford), Patrick Wolf (U. Arkansas), Yong Zhao (U. Kansas), and Jonathan Zimmerman (U. Penn).

Okay, so that’s how the Top 200 list was compiled. How were the actual rankings calculated? Each scholar was scored in eight categories, yielding a maximum possible score of 200. Scores are calculated as follows:

Google Scholar Score: This figure gauges the number of widely cited articles, books, or papers a scholar has authored. For this purpose, I use each scholar’s “h-index,” a useful, popular way to measure the breadth and impact of a scholar’s work. It involves ranking a scholar’s works in descending order of citation count and identifying the largest number h such that each of the scholar’s h most-cited works has been cited at least h times. For instance, a scholar who had 20 works that were each cited at least 20 times but whose 21st most frequently cited work was cited just 10 times would score a 20. The measure recognizes that bodies of scholarship influence how important questions are understood and discussed. The search was conducted using the advanced search “author” filter in Google Scholar. For those scholars who have created a Google Scholar account, their h-index was available at a glance. For those scholars without a Google Scholar account, a hand search was used to calculate their score while culling out works by other, similarly named individuals. Points were capped at 50. (This search was conducted on Dec. 11-13.)
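
To make the arithmetic concrete, here is a minimal sketch of how an h-index could be computed from a list of per-work citation counts; the function name and the cap argument are illustrative, not part of any official RHSU tooling.

```python
def h_index_points(citation_counts, cap=50):
    """Largest h such that h works have at least h citations each, capped at 50 points."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return min(h, cap)

# Example from the text: 20 works cited 20+ times, a 21st cited only 10 times -> 20 points.
print(h_index_points([20] * 20 + [10]))  # 20
```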

Book Points: A search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a co-authored book in which they were the lead author, and a half-point for co-authored books in which they were not the lead author or for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name.) We did two separate searches, one for “Hardcover” books and one for “Paperback,” and omitted repeats. This enabled us to omit books released only as e-books. While e-books are growing in popularity, few scholars on this list have penned books that are published solely as e-books—and the e-book category frequently picks up reissues of previously printed books. “Out of print” and not-yet-released volumes were excluded, as were reports, commissioned studies, multiple editions of the same book, and special editions of magazines or journals. We included only books written in English. This measure reflects the conviction that the visibility, packaging, and permanence of books give them an outsized role in influencing policy and practice. Book points were capped at 20. (This search was conducted on Dec. 11.)
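
As a rough illustration of that weighting, the sketch below assumes each qualifying book has already been classified by authorship role; the role labels and function name are hypothetical.

```python
def book_points(roles, cap=20):
    """2 points per solo-authored book, 1 per lead-authored co-authored book,
    0.5 per non-lead co-authored book or edited volume; capped at 20."""
    weights = {"solo": 2.0, "lead_coauthor": 1.0, "other_coauthor": 0.5, "editor": 0.5}
    return min(sum(weights[r] for r in roles), cap)

# Example: 5 solo books, 3 lead-authored books, 4 edited volumes -> 10 + 3 + 2 = 15 points.
print(book_points(["solo"] * 5 + ["lead_coauthor"] * 3 + ["editor"] * 4))  # 15.0
```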

Highest Amazon Ranking: This reflects the scholar’s highest-ranked book on Amazon. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name and sorting the results by “Best-selling.” The highest-ranked book was subtracted from 400,000, and the result was divided by 20,000 to yield a maximum score of 20. (In other words, a scholar’s best book had to rank in Amazon’s top 400,000 to earn points.) The nature of Amazon’s ranking algorithm means that this score can be volatile. The result is an imperfect measure but one that conveys real information about whether a scholar has penned a book that is influencing contemporary discussion of education policy and practice. (This search was conducted on Dec. 11.)
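
In code, the conversion from a best Amazon rank to points might look like the sketch below (names are illustrative); ranks worse than 400,000 simply earn zero.

```python
def amazon_points(best_rank):
    """(400,000 minus the best rank) divided by 20,000, floored at 0; a top rank approaches 20 points."""
    return max(0.0, (400_000 - best_rank) / 20_000)

# Example: a book ranked 150,000 earns (400,000 - 150,000) / 20,000 = 12.5 points.
print(amazon_points(150_000))  # 12.5
```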

Education Press Mentions: This measures the total number of times the scholar was quoted or mentioned in Education Week, the Chronicle of Higher Education, or Inside Higher Ed during 2023. Searches were conducted using each scholar’s first and last name. Searches included common diminutives and were conducted both with and without middle initials. Because searches occasionally returned results about the wrong individual, we hand-searched the text of each result to ensure the scholar was actually mentioned in the article. For the Chronicle of Higher Education, mentions in the weekly book-list posts are excluded, as are mentions in the “Transitions” column. The appearance counts for the Chronicle and Inside Higher Ed were averaged, and that tally was added to the number of times a scholar appeared in Education Week. (This was done to avoid overweighting the two higher education publications.) The resulting figure was multiplied by five, with total Ed Press points then capped at 30. (These searches were conducted on Dec. 11.)
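
A minimal sketch of that arithmetic, with hypothetical argument names for the three outlets:

```python
def ed_press_points(edweek, chronicle, inside_higher_ed, cap=30):
    """Education Week mentions plus the average of the two higher-ed outlets, times 5, capped at 30."""
    return min((edweek + (chronicle + inside_higher_ed) / 2) * 5, cap)

# Example: 3 EdWeek, 2 Chronicle, and 4 Inside Higher Ed mentions -> (3 + 3) * 5 = 30 (the cap).
print(ed_press_points(3, 2, 4))  # 30
```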

Web Mentions: This reflects the number of times a scholar was referenced, quoted, or otherwise mentioned online in 2023. The search was conducted using Google. The search terms were each scholar’s name and university. Using affiliation serves a dual purpose: It avoids confusion due to common names and increases the likelihood that mentions are related to university-affiliated activity. Variations of a scholar’s name (such as common diminutives and middle initials) were included in the search, if applicable. To avoid duplicate-inflated tallies, the number of unique Google results was used. In the rare instances where a scholar shared the same name as another person at their institution, we sampled the search results, calculated what proportion of those results were for the edu-scholar, and adjusted the overall score accordingly. Points were calculated by dividing total mentions by 60 and capped at 25. (This search was conducted on Dec. 14.)
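
The scoring step itself reduces to a division and a cap; the optional `share` argument in this sketch stands in for the hand-calculated proportion used when a scholar shares a name with someone at the same institution (names are illustrative).

```python
def web_points(unique_results, share=1.0, cap=25):
    """Unique Google results (optionally scaled by the scholar's share of them) divided by 60, capped at 25."""
    return min(unique_results * share / 60, cap)

# Example: 900 unique results, 80 percent of which are about the scholar -> 900 * 0.8 / 60 = 12 points.
print(web_points(900, share=0.8))  # 12.0
```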

Newspaper Mentions: A ProQuest search was used to determine the number of times a scholar was quoted or mentioned in U.S. newspapers. Again, searches used a scholar’s name and affiliation; diminutives and middle initials, if applicable, were included in the results. To avoid double counting the “Education Press” category, the scores do not include any mentions from Education Week, the Chronicle of Higher Education, or Inside Higher Ed. We removed duplicate articles by hand. The tally was multiplied by three, and points were capped at 30. (The search was conducted on Dec. 12.)
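
The corresponding arithmetic, sketched with an illustrative function name:

```python
def newspaper_points(mentions, cap=30):
    """De-duplicated ProQuest newspaper mentions times 3, capped at 30."""
    return min(mentions * 3, cap)

# Example: 7 newspaper mentions -> 21 points; 10 or more mentions hit the 30-point cap.
print(newspaper_points(7))  # 21
```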

Syllabus Points: This seeks to measure a scholar’s impact on what is being studied by today’s college students. This metric was scored using OpenSyllabusProject.org, the most comprehensive extant database of syllabi. It houses over 6 million syllabi from across American, British, Canadian, and Australian universities. This syllabus-points metric measures what gets assigned, which offers a snapshot of how widely a scholar’s work is being read in relevant courses. The search function makes it difficult to score a scholar’s whole body of work, so the result is only for the ubiquity of each scholar’s top-ranked text. The score reflects the number of times that text appeared on syllabi, with the tally then divided by 10. The score was capped at 20 points. (This search was conducted on Dec. 13.)
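
Sketched in the same style (names are hypothetical), the syllabus score is simply the appearance count of the top text divided by 10:

```python
def syllabus_points(top_text_appearances, cap=20):
    """Open Syllabus appearances of the scholar's most-assigned text, divided by 10, capped at 20."""
    return min(top_text_appearances / 10, cap)

# Example: a text appearing on 85 syllabi -> 8.5 points; 200 or more appearances hit the 20-point cap.
print(syllabus_points(85))  # 8.5
```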

Congressional Record Mentions: A simple name search in the Congressional Record for 2023 determined whether a scholar was referenced by a member of Congress. Qualifying scholars received 5 points. (This search was conducted on Dec. 11.)
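
To close the loop on the arithmetic, here is a minimal sketch (function and argument names are illustrative) of the flat Congressional Record bonus and of how the eight category scores combine; because the per-category caps of 50, 20, 20, 30, 25, 30, 20, and 5 sum to 200, that is the maximum possible total.

```python
def congressional_points(mentioned_in_record):
    """Flat 5 points for any mention in the 2023 Congressional Record."""
    return 5 if mentioned_in_record else 0

def total_score(google_scholar, books, amazon, ed_press, web, newspaper, syllabus, congressional):
    """Sum of the eight category scores; the per-category caps make 200 the maximum possible."""
    return (google_scholar + books + amazon + ed_press
            + web + newspaper + syllabus + congressional)
```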

There are obviously lots of provisos when it comes to the Edu-Scholar results. Different disciplines approach books and articles differently. Senior scholars have had more opportunity to build a substantial body of work and influence (for what it’s worth, the results are unapologetically engineered to favor sustained accomplishment). And readers may care more for some categories than others. That’s all well and good. The intent is to spur discussion about the nature of constructive public influence: Who’s doing it, how much it matters, and how to gauge a scholar’s contribution.

A few notes regarding questions that arise every year:

  • There are some academics who dabble (quite successfully) in education but for whom education is only a sideline. They are not included in these rankings. For a scholar to be included, education must constitute a substantial slice of their scholarship. This helps ensure that the rankings serve as something of an apples-to-apples comparison.
  • Scholars sometimes change institutions in the course of a year. My policy is straightforward: For the categories where affiliation is used, searches are conducted using a scholar’s affiliation as checked during the summer. This avoids concerns about double-counting and reduces the burden on my overworked RAs. Scholars do get dinged a bit if they change institutions between spring and fall. But that’s life.
  • Some eligible scholars wind up assuming deanships or serving as university provosts or presidents. The rule is that education school deans remain eligible but that provosts and presidents are not ranked.
  • It goes without saying that tomorrow’s list represents only a sliver of the nation’s education researchers. For those interested in scoring additional scholars, it’s simple to do so using the scoring rubric enumerated above. Indeed, the exercise was designed so that anyone can generate a comparable rating for a given scholar in a half hour or less.
  • This is an imperfect and evolving exercise. Questions and suggestions are welcome. And, if ranked scholars would like to have their names listed differently or have their discipline categorized differently, I’m happy to be as responsive as feasible within the bounds of consistency.

Finally, a note of thanks: For the hard work of coordinating the Selection Committee, assembling the list of nominees, and crunching and double-checking the results for 200 scholars, I owe an immense debt of gratitude to my invaluable research assistants Caitlyn Aversman, Greg Fournier, Anna Coulter, Ilana Ovental, Joe Pitts, and Riley Fletcher.

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
