The National Board for Professional Teaching Standards ought to take into account student-learning gains in deciding which teachers are skilled enough to merit its advanced teaching credential, a team of researchers says in a provocative new paper.
Created in 1987, the board has conferred its credential on nearly 64,000 teachers, and 42 states offer financial incentives to encourage teachers to undergo the lengthy, voluntary certification process. The process draws for the most part on performance-based measures of teaching practice, such as essay responses to pedagogical questions, samples of the written feedback that teachers give students, and videotapes of classroom lessons.
In the new paper, though, researchers from Harvard University, Dartmouth College, and the Los Angeles Unified School District make a case for combining the current measures with newer, “value added” calculations that take into account the test-score gains that students make in applicants’ classes, or at least for giving more weight in the assessment process to the individual tasks that are linked most closely to improved student achievement.
“For some reason, the teacher-effectiveness debate is broken into two camps,” Thomas J. Kane, a study author and a professor of education and economics at Harvard’s graduate school of education, said in an interview. “One side focuses on students’ achievement, and then there’s another side that focuses primarily on measures of teacher practice. We think the reasonable approach is not either, but both.”
Experts predict, however, that the group’s proposal will draw opposition from educators worried that the tests measure only students’ factual recall rather than other important educational outcomes, such as their ability to analyze or think critically.
“If you’re skeptical of the [student] tests, you’re going to be skeptical of the test no matter how it’s set up,” said Dan D. Goldhaber, who reviewed the study but does not count himself among those skeptics. “One of the things that’s been politically useful about the national board was that it developed its assessment independent of student achievement and yet it is validated by value-added measures,” added Mr. Goldhaber, a research professor of public affairs at the University of Washington, in Seattle.
Who’s Most Effective?
The paper, which has not yet been published, is attracting attention as much for its research methods as for its recommendations. According to Mr. Kane and others, it is among the first to use random-assignment methodology to confirm whether the national board’s assessment and value-added calculations measure what they’re intended to measure—that is, whether students learn more in classes taught by the teachers rated highly by those systems.
To find out, researchers drew on six years of testing data for elementary students in the 708,000-student Los Angeles Unified School District, which at the time of the study had more nationally certified teachers than any other district in the nation.
District officials helped the researchers identify 99 elementary teachers who had applied for board certification and then matched them with a comparison group of nonapplicant teachers in the same schools and grades and with similar levels of experience. The schools’ principals were asked to draw up two classrooms of students that they would be willing to assign to either teacher in a matched pair. Then the researchers randomly assigned one of the two classrooms to each teacher in the pair.
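In rough terms, the design works like a coin flip within each matched pair: the principal approves two rosters he or she would be comfortable giving to either teacher, and chance, not the principal, decides who gets which one. The Python sketch below is purely illustrative; the teacher labels and rosters are invented and do not come from the study.

```python
import random

def assign_pair(applicant, nonapplicant, roster_a, roster_b, rng=random):
    """Randomly give one of the two principal-approved rosters to each teacher."""
    rosters = [roster_a, roster_b]
    rng.shuffle(rosters)  # the coin flip: the order of the rosters is randomized
    return {applicant: rosters[0], nonapplicant: rosters[1]}

# Hypothetical matched pair and rosters (not data from the study).
assignment = assign_pair(
    "applicant teacher",
    "matched nonapplicant teacher",
    "roster A (20 students)",
    "roster B (20 students)",
)
print(assignment)
```

Because chance determines which roster each teacher receives, later differences in the two classes’ test-score gains can be credited to the teachers rather than to how students were sorted into rooms.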
The results showed that students whose teachers got high ratings under the NBPTS assessment system gained significantly more on state exams over the school year than did their counterparts taught by low-scoring teachers. But when the students of high-scoring teachers were compared with those whose teachers had not applied for certification at all, the test-score differences, while still positive, shrank to statistical insignificance.
“Ineffective teachers are just as likely as effective teachers to apply for national-board certification,” said Mr. Kane, “but the board process does seem to provide some information on teachers’ effectiveness, so people who are certified are a little bit better than the average nonapplicant, and unsuccessful applicants are worse than nonapplicants.”
Using four years of prior test-score data from teachers’ classes, the researchers ran similar analyses with value-added calculations. They found that, while the overall pattern of results was similar, the value-added analyses did an even better job than the regular national-board measures of predicting which teachers were most likely to produce the biggest learning gains among students.
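For readers unfamiliar with the term, a value-added calculation asks how much better or worse a teacher’s students score at the end of the year than their own prior scores would predict; the teacher’s average “surprise” is the value added. The sketch below shows that basic logic on simulated numbers. The actual models in the paper include many more controls and years of data, so this is an assumption-laden toy, not the researchers’ procedure.

```python
import numpy as np

# Simulated data: 200 students, 10 teachers, made-up effects. Illustrative only.
rng = np.random.default_rng(0)
prior = rng.normal(size=200)                 # prior-year test scores
teacher = rng.integers(0, 10, size=200)      # which teacher taught each student
true_effect = np.linspace(-0.3, 0.3, 10)     # hypothetical teacher effects
current = 0.7 * prior + true_effect[teacher] + rng.normal(scale=0.5, size=200)

# Predict end-of-year scores from prior scores, then look at the residuals.
slope, intercept = np.polyfit(prior, current, 1)
residual = current - (slope * prior + intercept)

# A teacher's value-added estimate is the average residual of his or her students.
value_added = {t: round(residual[teacher == t].mean(), 2) for t in range(10)}
print(value_added)
```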
The researchers also analyzed the 10 parts of the board’s current certification system separately to see which tasks were linked most closely to student achievement. They found, for instance, that videotaped lessons were a better predictor of student achievement than samples of teachers’ written feedback. If NBPTS analysts were to reweight the scoring system to give more emphasis to the tasks most closely related to improved student achievement, they could double the assessment’s power to predict which teachers boost test scores, according to the study.
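One way to picture that reweighting, under the assumption that the goal is simply a composite score that tracks value-added more closely, is to fit weights for the exercises against historical value-added estimates and compare the fitted composite with an equally weighted one. Everything below is simulated; it is a sketch of the general idea, not the study’s method.

```python
import numpy as np

# Simulated scores for 300 teachers on 10 assessment exercises, plus a
# made-up value-added measure in which some exercises matter far more.
rng = np.random.default_rng(1)
task_scores = rng.normal(size=(300, 10))
true_weights = np.array([0.5, 0.05, 0.4, 0.05, 0.3, 0.05, 0.05, 0.2, 0.05, 0.05])
value_added = task_scores @ true_weights + rng.normal(scale=0.5, size=300)

# Least-squares fit: which exercises best predict value-added?
fitted_weights, *_ = np.linalg.lstsq(task_scores, value_added, rcond=None)

equal_composite = task_scores.mean(axis=1)            # every exercise counts the same
reweighted_composite = task_scores @ fitted_weights   # predictive exercises count more

print("correlation with value-added, equal weights: ",
      round(np.corrcoef(equal_composite, value_added)[0, 1], 2))
print("correlation with value-added, fitted weights:",
      round(np.corrcoef(reweighted_composite, value_added)[0, 1], 2))
```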
Many Questions
The new paper is one of 22 studies the organization commissioned in 2002 to analyze its program, said Mary E. Dilworth, the NBPTS vice president for higher education initiatives and research. The paper was also financed in part by the Chicago-based Spencer Foundation, which underwrites coverage of research in Education Week. A scientific panel of the National Research Council reviewed all that work for its own report on the national board, which was released June 11 and concluded that nationally certified teachers are more effective than teachers without the credential. (“Credential of NBPTS Has Impact,” June 18, 2008.)
Whether the national board will take Mr. Kane’s advice and incorporate value-added methods into its assessment process is an open question.
“There are a lot of questions about whether these [student] tests are tapping into the things that the national board or any teacher can make a difference in,” said Tony Norman, the associate dean for accountability and assessment at Western Kentucky University’s college of education and behavioral sciences in Bowling Green. He sits on the board’s visiting panel for research. “Can teachers say, ‘I set these goals for my kids and these tests are measuring the goals I set?’ ”
Another challenge, Mr. Norman said, is that teachers applying for certification would need to be notified of any such change years in advance if their previous students’ test scores were to become part of the assessment process.
“We need to spend a little more time looking actually at the assessments that we’re using to gauge student performance,” said Ms. Dilworth.