Preliminary results from schools taking part in a Chicago program featuring performance-based compensation for teachers show no evidence that the program has boosted student achievement on math and reading tests, compared with a group of similar, nonparticipating schools, a study released last week concludes.
The findings are notable not just for being at odds with other studies of the Teacher Advancement Program model, but also for the Chicago experiment’s unique background: During his tenure as chief executive officer of the district, Arne Duncan—now the U.S. secretary of education—oversaw the development of the initiative.
And through the economic-stimulus program’s Race to the Top and Teacher Incentive Fund competitions, the Obama administration has pushed states and districts to engage in changes to teacher-pay and -evaluation systems. Mr. Duncan has been a vocal supporter of the incentive-fund program, which provided some of the financing for the Chicago project.
The study by Mathematica Policy Research also found that the Chicago TAP, a local version of the national program, did not improve the rates of teacher retention in participating schools or in the district.
The findings from the Princeton, N.J.-based evaluation group cover the first two years of the program’s four-year implementation in select schools in the 409,000-student district.
At least one national independent study of TAP conducted in 2008 using a different research design found positive effects at the elementary level.
The new study offers few overt clues to explain its less-positive findings, and officials from both the district and from the U.S. Department of Education say it is premature to make a final judgment about the program’s impact.
Franklin R. Shuftan, a spokesman for the Chicago school system, noted that the district chose to focus on the professional-development aspect of the complex program, rather than on its bonus-pay feature, during the first two years of implementation.
“The report acknowledges that programs such as TAP take time to change attitudes and alter a school’s culture,” he said. “Measurables such as test scores and teacher retention might be better thought of as longer-term or final outcomes.”
Federal officials took a similar tack. “We know TAP and other reforms are hard work. We can’t expect immediate results,” said Peter Cunningham, a spokesman for the federal Education Department. “That’s why we’re committed to evaluating programs over the long term and identifying ones that deliver the results for children.”
Comparison Group
Seven of the 34 current federal incentive-fund grantees use a form of the TAP model in their programs. In addition to compensation bonuses, the program includes professional development keyed to a teacher-evaluation framework and a career ladder for teachers.
The Chicago program was designed jointly by the Chicago Teachers Union and the district, among other partners.
To determine what would have happened in the absence of Chicago’s TAP, analysts compared results from the 16 participating schools with a group of 200 schools with similar student demographics, teacher-retention rates, accountability status, and student-achievement levels.
They found no statistically significant differences in student scores or in teacher-retention rates among those schools.
The program’s impact also did not appear to increase after an additional year of experience. Several participating schools delayed implementation of TAP by a year so that, using a random-assignment methodology, the researchers could discern whether differences between schools with two years of experience in TAP and those with only one year could be attributed to the program.
They discovered that schools with only one year of exposure to TAP actually had slightly higher reading and math test scores, on average, than did the schools in their second year of implementation.
The Chicago findings are difficult to generalize to other TAP sites, as officials there did not adhere to all aspects of the TAP model. For one, the district spread the bonus funding to staff beyond teachers. (April 1, 2009.)
Because of problems with obtaining student-growth data linked to individual teachers, Chicago also paid bonuses based on schoolwide, rather than classroom-level, achievement growth. The National Institute for Excellence in Teaching, which operates TAP, recommends that at least 30 percent of bonus pay be based on the results of classroom measures of student progress.
The study also notes that Chicago’s incentive payouts, which were lower than those in other programs nationwide, were not as heavily emphasized as other components. Payouts for teachers averaged $1,100 in schools in their first year of implementation and $2,600 in schools in their second year of TAP. That’s lower than the $2,000 and $4,000 per-teacher targets set by the district.
Shortly after the release of the findings, the Santa Monica, Calif.-based institute commissioned an independent group, Interactive Inc., to help determine why the Chicago program didn’t produce results.
The findings could influence the larger field of performance pay, since they are likely to be scrutinized by districts hoping to win a cut of funds in the next round of federal incentive grants, said Steven Glazerman, a senior researcher at Mathematica.
“You have to wonder whether the result would have been different if the payouts had been larger or more meaningfully differentiated,” Mr. Glazerman said.
Guidelines for the next round of federal Teacher Incentive Fund programming do not specify the amount of incentive bonuses or how heavily to weigh growth in student test scores. But they do specify that average bonus payouts for educators should be “substantial,” perhaps 5 percent of the average teacher salary, and that top-performing educators should earn far beyond that amount, perhaps three times as much.
Mathematica plans to issue two more reports in coming years as it continues to evaluate the program.