Traditional methods of studying social-emotional skills will have to become more reliable and less subjective if educators and policymakers expect to incorporate them validly into accountability systems and school improvement plans, education researchers meeting here last week cautioned.
The federal Every Student Succeeds Act broadens the definition of school success, requiring states and districts to include nonacademic factors in their accountability systems. Concepts like growth mindset—the belief that intelligence and other skills are not fixed, but can be improved through effort—and grit—the ability to sustain interest over a long period and persist in a task in the face of boredom or challenges—have garnered great interest as potential nonacademic levers to lift student academic achievement.
“It’s only in the last five years that mindset has moved from an influential academic theory to an educational phenomenon,” David Miele, an education professor at Boston College, said during a symposium on the research at the American Educational Research Association’s annual conference. The gathering drew more than 16,500 researchers from around the world.
But the research studying the skills hasn’t quite caught up with their rising popularity, some scholars said.
Jolene Jesse, a program director in the National Science Foundation’s Education and Human Resources Directorate, analyzed 167 NSF-funded studies of those skills—including 88 interventions designed to improve such qualities as motivation, self-efficacy, and persistence—and found that the instruments being developed are largely self-reports.
For example, to identify changes in a student’s “grittiness,” a researcher might ask students to rate, on a scale of 1 to 5, how much they agree with such statements as “Setbacks don’t discourage me” or “I finish whatever I begin.”
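As a rough illustration of how responses to such a scale might be turned into a single score, consider the short Python sketch below. The items, the reverse-scoring rule, and the sample answers are hypothetical, not the actual instruments used in the NSF-funded studies.

    # Illustrative scoring of a short, Likert-style self-report scale.
    # Items and scoring rules are hypothetical, not an actual research instrument.
    ITEMS = {
        "Setbacks don't discourage me.": False,                      # scored as answered
        "I finish whatever I begin.": False,                         # scored as answered
        "New projects sometimes distract me from old ones.": True,   # reverse-scored (hypothetical item)
    }

    def grit_score(responses):
        """Average 1-5 agreement ratings, reverse-scoring negatively framed items."""
        total = 0
        for item, reverse in ITEMS.items():
            rating = responses[item]  # integer from 1 (disagree) to 5 (agree)
            total += (6 - rating) if reverse else rating
        return total / len(ITEMS)

    # Example: one student's answers.
    print(grit_score({
        "Setbacks don't discourage me.": 5,
        "I finish whatever I begin.": 4,
        "New projects sometimes distract me from old ones.": 2,
    }))  # -> 4.33 (higher = "grittier" on this toy scale)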
Beliefs vs. Actions
Such methods are long-established, but in the symposium and several other discussions at the AERA meeting, researchers warned that surveys like these can yield muddy results and are vulnerable to bias.
“Let’s say we were studying students’ math ability,” said Evan Heit, the director of the NSF’s division of research on learning. “How satisfied would we all be if we did that by simply asking students, ‘OK, how good are you in math?’ We probably would not be very satisfied. So should we be using self-reports for [social-emotional learning]?”
For example, Lee Shumow, an educational psychology professor at Northern Illinois University, evaluated how an intervention affected teachers’ and students’ growth mindsets. After conducting teacher training and a six-week curriculum designed to boost growth mindsets, Shumow used self-report surveys but also observed 10 7th grade and 15 high school science classes.
She measured how often teachers gave students feedback that supported a growth mindset—for example, “You did well on this test; see how your studying is paying off?”—and how often they made comments that would undermine a growth mindset, such as comparing students’ test scores or saying, “You aced this test; see how smart you are?”
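A sketch of how coded observations like these might be tallied into per-teacher feedback rates could look like the following. The teachers, codes, and counts are invented for illustration and are not Shumow’s actual protocol or data.

    # Sketch of tallying coded classroom comments into per-teacher rates.
    # Teachers, codes, and counts are invented; this is not the study's protocol.
    from collections import Counter

    # One entry per observed comment, coded as supporting or undermining a growth mindset.
    observed_comments = [
        ("teacher_A", "supporting"),   # e.g., "See how your studying is paying off?"
        ("teacher_A", "undermining"),  # e.g., "You aced this test; see how smart you are?"
        ("teacher_B", "supporting"),
        # ... one entry per coded comment in the observation log
    ]

    counts = Counter(observed_comments)

    def feedback_rates(teacher):
        supporting = counts[(teacher, "supporting")]
        undermining = counts[(teacher, "undermining")]
        total = supporting + undermining
        return {"supporting": supporting / total, "undermining": undermining / total}

    print(feedback_rates("teacher_A"))  # {'supporting': 0.5, 'undermining': 0.5}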
69ý whose teachers provided more growth-oriented feedback had better performance and higher levels of growth mindset.
During teacher training, the middle and high school teachers participating in the growth-mindset intervention answered survey questions in ways that suggested each had a strong growth mindset.
“We did measure teachers’ mindsets using a standard survey to measure mindset,” Shumow explained, “but teachers are pretty savvy. ... They knew exactly how they were supposed to respond, and they answered [the questions] accordingly.”
And sure enough, during classroom observations, the teachers who were part of the intervention made more comments supporting growth mindset than did teachers who had not taken part in the intervention—but the intervention teachers also made more undermining comments than the teachers in the control group.
“We think the statements the teachers made and the behavior they displayed in the classroom” reflect a conflict between the explicit beliefs they are learning and implicit beliefs that may be more fixed, Shumow said. “We think the implicit beliefs are where it’s at.”
Heit, of the NSF, pointed to a recent Brookings Institution report calling for researchers studying social skills to distinguish better between character traits and the related behaviors that can be trained. Heit and his colleague Jesse called for more observational protocols that would help teachers and researchers understand what grit or a growth mindset looks like in day-to-day practice.
True Grit? It Depends
Even when students or teachers answer surveys frankly, they may be influenced by comparing their own behavior with that of their peers—what researchers call reference bias.
That’s why prior studies have found that very high-performing students in competitive schools often report being less hardworking than they actually are.
It can also be why between-school comparisons of grit don’t always work well, according to another study presented at the meeting by University of Pennsylvania psychology professor Angela Duckworth, who coined the term “grit”; mindset researcher David Yeager at the University of Texas at Austin; and colleagues at Stanford University and the University of Notre Dame.
The researchers compared two types of student surveys of academic persistence with a behavioral test measuring how long high school students would continue to perform a difficult but mundane task while being distracted.
69ý’ own reported perseverance predicted how likely they were to complete their first year of college in comparison with others at their own school. But the student reports were not accurate for comparing the college-persistence rates between one high school and another.
By contrast, performance on the behavior task did accurately predict differences in college persistence both within the high schools and between them.
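A toy simulation can make the logic of reference bias concrete: if students rate themselves against their own school’s norms, real differences between schools can disappear from self-reports while remaining visible in a behavioral measure. The numbers below are invented and do not come from the study.

    # Toy simulation of reference bias: each student rates themselves against
    # their own school's norms, which erases the real between-school difference
    # from self-reports but not from a behavioral measure. Numbers are made up.
    import random
    from statistics import mean

    random.seed(0)

    def simulate_school(true_mean, n=100):
        true = [random.gauss(true_mean, 1.0) for _ in range(n)]   # "true" persistence
        school_norm = mean(true)
        self_report = [3.0 + (t - school_norm) for t in true]     # judged relative to peers
        behavior = [t + random.gauss(0, 0.5) for t in true]       # noisy but absolute
        return self_report, behavior

    report_a, behavior_a = simulate_school(true_mean=4.0)  # more persistent school
    report_b, behavior_b = simulate_school(true_mean=2.0)  # less persistent school

    print("self-report means:", round(mean(report_a), 2), round(mean(report_b), 2))  # ~3.0 vs ~3.0
    print("behavioral means: ", round(mean(behavior_a), 2), round(mean(behavior_b), 2))  # ~4.0 vs ~2.0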
As children grow into adolescence, Duckworth said, they may compare themselves more to their peers, which may also affect how accurately they report their own persistence, self-efficacy, or self-control.
One of the limitations of using student self-reports to measure grit, Duckworth said, is that they are “a judgment based on a mental model that’s influenced by many things other than the objective behavior.”
In fact, in a separate study, Duckworth and postdoctoral researcher Lauren Eskreis-Winkler flipped the grit research structure on its head: They used a survey designed to get students thinking about other, younger students as the intervention to increase students’ own gritty behavior.
In a randomized controlled experiment, the researchers gave 550 middle school students basic information on grit and randomly assigned some to fill out a survey giving “tips” to 4th graders on how to be more persistent and gritty. Then the researchers asked all the students to complete challenging math problems in an online program, but added that they could “take a break” and play simple entertainment games whenever they wanted.
In both cases, students who had acted as “mentors” to other students via the surveys persisted in the math task longer and with fewer breaks than those who had not been mentors. The effect was strongest for students who were initially deemed low-performing in math.
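To illustrate the kind of comparison such an experiment supports, a simple summary by condition and subgroup might look like the sketch below; the data layout and numbers are invented, not the researchers’ data.

    # Illustrative comparison of persistence outcomes in a two-arm experiment;
    # the data layout and numbers are invented, not the study's data.
    from statistics import mean

    # Each record: (condition, minutes_on_math_task, breaks_taken, low_performer)
    records = [
        ("mentor", 24, 1, True), ("mentor", 19, 2, False),
        ("control", 15, 3, True), ("control", 18, 2, False),
        # ... one record per student
    ]

    def summarize(condition, low_performers_only=False):
        rows = [r for r in records
                if r[0] == condition and (not low_performers_only or r[3])]
        return round(mean(r[1] for r in rows), 1), round(mean(r[2] for r in rows), 1)

    print("mentor  (all):", summarize("mentor"))                    # (21.5, 1.5)
    print("control (all):", summarize("control"))                   # (16.5, 2.5)
    print("mentor  (low performers):", summarize("mentor", True))   # (24, 1)
    print("control (low performers):", summarize("control", True))  # (15, 3)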
The results have also been replicated with three groups of adults: at-risk community college students, unemployed workers trying to find new jobs, and smokers seeking to quit.