
Assessment

Statistics Agency Gauging State ‘Proficiency’ Thresholds

By Sean Cavanagh — November 22, 2006 4 min read

The statistical arm of the U.S. Department of Education is conducting a study to see how states’ definitions of student academic proficiency compare with the definition used by the prominent national test known as “the nation’s report card.”

The goal of the study is to use that test, the National Assessment of Educational Progress, as a “yardstick” to judge the different state proficiency thresholds, said Mark S. Schneider, the commissioner of the National Center for Education Statistics, which is overseeing the project.

“For the first time, we would have a way of comparing states’ proficiency standards,” Mr. Schneider said in an interview. “Everybody’s been wanting something like this,” he added. “I’m very anxious to get this out the door.”

The proficiency levels states set on their individual assessments would be judged against those set in 4th and 8th grade reading and mathematics for NAEP, Mr. Schneider said. NCES officials hope the study is complete and ready for public release by March or April, he added.

Improving student proficiency in reading and math has emerged as a dominant national theme in K-12 education in recent years. Under the federal No Child Left Behind Act, which was signed into law in 2002, states are required to test students annually in grades 3-8, and at least once in high school, in those subjects. Scores on each state’s tests are used to evaluate whether schools and districts have made adequate yearly progress toward the goal that all students be deemed “proficient” by 2014.

But the content of state tests and the standards states use to determine which students are proficient vary enormously. In addition, the percentages of students whom states report as having met the target diverge widely, leading critics to question the reliability of states’ tests as measures of academic progress.

Partly because of those inconsistencies, NAEP results are scrutinized by educators and policymakers as a uniform benchmark against which students from every state can be judged. States are required to have a sample of their students participate in NAEP every two years in 4th and 8th grade reading and math to receive federal education funds.

Aside from “proficient,” NAEP’s other achievement levels for student performance are “basic” and “advanced.” NAEP standards for proficiency are widely regarded as more difficult than many of those set for state tests.
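
In practical terms, the achievement levels amount to thresholds on NAEP’s score scale. Here is a minimal Python sketch; the grade 4 reading cut scores shown are the published values in use at the time, but the function and usage are purely illustrative and not part of the study:

```python
# NAEP grade 4 reading achievement-level cut scores (0-500 scale).
# Published values; treat the exact numbers as illustrative here.
CUTS = [("advanced", 268), ("proficient", 238), ("basic", 208)]

def achievement_level(scale_score: float) -> str:
    """Classify a NAEP scale score into an achievement level."""
    for level, cut in CUTS:
        if scale_score >= cut:
            return level
    return "below basic"

print(achievement_level(245))  # -> "proficient"
```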

Mr. Schneider discussed the goals of the proficiency study at the quarterly meeting of the National Assessment Governing Board, on Nov. 17. The 26-member board sets NAEP policy.

Wesley D. Bruce, the assistant superintendent for assessment for the Indiana education department, said he first heard of the research project at a meeting of the Education Information Management Advisory Consortium last month. That organization, which Mr. Bruce chairs, advises the Washington-based Council of Chief State School Officers on school data issues.

Wide Scrutiny

Mr. Bruce predicted that the federal study could draw widespread attention and provide state officials with useful information. But he also said differences between the percentages of students who achieve proficiency on NAEP and those who meet that threshold on state tests could be explained partly by the different content presented at each grade level. Such explanations, however, are likely to be lost on the public when the study is released, he said.

“The concern is the rush to judgment,” Mr. Bruce said. The public might look at the results of the study and think, “If you’re not right at the NAEP level, you got it wrong,” he said.

Mr. Schneider, who was nominated as NCES commissioner by President Bush last year, said his agency’s work on the study began before he took the job, though he has taken an interest in the project. The NCES is one of three research centers housed within the Institute of Education Sciences that Congress created in 2002 in reorganizing the Education Department’s research operations.

Mr. Schneider, who is scheduled to serve as commissioner until his term expires in 2009, said the study was likely to draw broad scrutiny. “People are going to have to look at this,” he said. They will “look at the models, look at the methods, and look at the results.”

The lead researcher is Henry I. Braun, who holds the title of distinguished presidential appointee at the Educational Testing Service, the giant nonprofit research and testing organization in Princeton, N.J. His work will build on earlier NCES-sponsored research on the alignment between NAEP and state proficiency standards, Mr. Schneider said.

The study rests partly on a comparison of the cutoff scores states use to deem students proficient on their own tests with the corresponding cut scores used by NAEP, Mr. Braun said. The research examines how populations of students deemed proficient in individual states fared on the national assessment, as in the sketch below.
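
One way to picture that comparison is an equipercentile-style mapping: take the share of students a state reports as proficient, and find the NAEP scale score that the same share of that state’s NAEP sample scores at or above. The following is a minimal Python sketch, not drawn from the study itself; the function name, the simulated sample, and the state figures are hypothetical assumptions:

```python
import numpy as np

def naep_equivalent_score(naep_scores: np.ndarray,
                          pct_state_proficient: float) -> float:
    """Map a state's proficiency standard onto the NAEP scale.

    If pct_state_proficient percent of students clear the state's bar,
    that bar corresponds to the (100 - pct)th percentile of the state's
    NAEP score distribution.
    """
    return float(np.percentile(naep_scores, 100.0 - pct_state_proficient))

# Hypothetical numbers: a simulated state sample of grade 4 reading
# NAEP scores, and a state reporting 75% of its students as proficient.
rng = np.random.default_rng(0)
sample = rng.normal(loc=220, scale=35, size=2000)

state_cut_on_naep = naep_equivalent_score(sample, pct_state_proficient=75.0)
NAEP_PROFICIENT_CUT = 238  # published NAEP grade 4 reading cut score

print(f"State standard maps to a NAEP score of about {state_cut_on_naep:.0f}")
print(f"NAEP 'proficient' cut score: {NAEP_PROFICIENT_CUT}")
# A mapped score well below NAEP's cut suggests the state's bar is
# easier than NAEP's -- the kind of gap the study is meant to surface.
```

A real analysis would use actual NAEP samples and account for sampling and measurement error, the “basic uncertainty” Mr. Braun alludes to below.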

Although he believes the study is likely to spark debates among policymakers and the public, Mr. Braun also said he hoped it would advance research into the various standards used by states.

“There’s a basic uncertainty that can never be eliminated by statistical methods,” he said. The study, he added, “is the most reasonable way we can think of to [examine] state proficiency standards on a purely statistical basis.”


A version of this article appeared in the November 29, 2006 edition of Education Week as Statistics Agency Gauging State ‘Proficiency’ Thresholds
