Teacher-licensing exams have been subject to many criticisms: that there are too many of them, that their content isn’t relevant, and that their costs are objectionable. New data open a further avenue for criticism: They’re too easy.
Every state sets the passing score on its teacher-licensing tests below the mean score of the pool of test-takers, according to a federal analysis released recently, suggesting that the exams pose little challenge to many of the individuals taking them.
The data confirm a 2012 Education Week analysis showing similar gaps in a sample of states.
Released in an annual report issued this month by the U.S. Department of Education, the data compare the average passing score on each state’s teacher exams against the average performance of candidates taking those tests. A clear pattern emerges of tests that, on the whole, most candidates pass, partly because of where states set the bar, even as multiple groups call on states to institute policies to recruit academically stronger candidates.
Excluding the U.S. Virgin Islands, gaps between cutoff scores and the average score of test-takers range from a low of 10.1 points, in Arizona, to a high of 22.5 points, in Nebraska. For the nation as a whole, the average certification-test passing score is set nearly 15 points below the mean score of candidates.
The gaps aren’t strictly comparable from state to state, because of differences in the subjects and certification fields tested and in the tests’ scales. Many states use the Princeton, N.J.-based Educational Testing Service’s Praxis series for their licensing tests, and others use state-specific exams designed by Evaluation Systems Group, a Pearson entity based in Hadley, Mass.
States also administer the tests at different points, typically requiring candidates to pass some exams before entering a preparation program and others before receiving a teaching certificate. Finally, test content varies, though the exams often measure knowledge below the college level.
The federal data represent test-taking from the 2009-10 year.
New Rules
The analysis was made possible by new provisions in federal law. In its 2008 rewrite of the Higher Education Act, Congress directed states to begin reporting both the passing rate and the average scaled score of all test-takers on each teacher examination. (A scaled score is a candidate’s raw performance on the exam translated to the test’s reporting scale, which facilitates year-to-year comparisons.)
States and individual higher education institutions are required to publish that information, along with many other details on teacher preparation, in annual “report cards” issued to the public.
Though licensing-test cutoff scores have, in general, risen in recent years, states have instituted many more tests to meet requirements in the Elementary and Secondary Education Act to staff all core-content classes with a “highly qualified” teacher—one who demonstrates subject-matter knowledge, among other things.
Some officials say the gaps are expected because the tests aren’t meant to do more than prevent the weakest candidates from teaching.
“These tests are a measure of minimum content knowledge. They’re not designed or validated to say that if you score significantly higher, you’re going to be a better teacher,” said Phillip S. Rogers, the executive director of the National Association of State Directors of Teacher Education and Certification, in Washington, which represents individuals who direct licensing or sit on states’ teacher-standards boards. “What it does mean is that the person who passes the test has the minimum content knowledge that the jurisdiction thinks is necessary.”
The Education Department report notes that the gaps could be the result of other factors, too.
“It is also possible that a small gap ... signals relatively low-performing test-takers and a large gap signals relatively high-performing test-takers,” it says.
It is not possible to know the relative difficulty of the exams without knowing the spread of scores on the tests’ scales—such as what percentage of test-takers scored in the bottom quartile. States do not have to report that information on their report cards.
Seeking Comparability
But teacher-educators said the report does raise questions about how to make the data more transparent and comparable.
“Cut scores on state teacher-licensing tests do vary widely across states, and we need more consistency,” said Sharon P. Robinson, the president of the American Association of Colleges for Teacher Education, a Washington group that represents about 800 institutions.
The group supports efforts to establish common cutoff scores across states, which would “provide consistency across the profession in terms of expectations for candidates’ performance on these exams,” Ms. Robinson said. “However, multiple-choice and selected-response tests will not answer the most essential question: ‘Is a new teacher ready for the job?’”
AACTE has been working with a Stanford University center and about half the states to pilot an exam that purports to measure teacher-candidates’ classroom readiness, based in part on their student-teaching performance.