Three state consortia will vie for $350 million in federal financing to design assessments aligned to the recently unveiled common-core standards, according to applications submitted last month to the U.S. Department of Education.
Part of the Race to the Top program, the competition aims to spur states to band together to create measures of academic achievement that are comparable across states.
Two consortia—the SMARTER Balanced Assessment Consortium, which consists of 31 states, and the Partnership for the Assessment of Readiness for College and Careers, or PARCC, which consists of 26 states—will compete for the bulk of the funding, $320 million, to produce comprehensive assessment systems.
Potentially signaling a shift away from the multiple-choice questions that dominated tests in the wake of the No Child Left Behind Act, both consortia would combine results from performance-based tasks administered throughout the school year with a more traditional end-of-the-year measure for school accountability purposes.
State officials “wanted to make sure that the assessments were actually signaling appropriately the kind of instruction that teachers were expected to engage in and performance students were expected to be able to do,” said Michael Cohen, the president of Achieve, a Washington-based nonprofit group that is a project-management partner for the PARCC consortium. “They didn’t want a bunch of bubble tests to drive instruction.”
Both consortia also plan to administer their year-end assessments via computer. But only the SMARTER Balanced group would use “computer adaptive” technology, which adjusts the difficulty of questions in relation to a student’s responses, as the basis of that year-end test.
SMARTER BALANCED ASSESSMENT CONSORTIUM (31 STATES)
Procurement state: Washington
Governing states: Connecticut, Hawaii, Idaho, Kansas, Maine, Michigan, Missouri, Montana, Nevada, New Mexico, North Carolina, Oregon, Utah, Vermont, Washington, West Virginia, Wisconsin
Participating states: Alabama, Colorado, Delaware, Georgia, Iowa, Kentucky, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina, South Dakota
Key design elements: Summative assessment will be based on computer-adaptive technology. Results will be coupled with those from performance-based tasks administered over the course of the year.
Performance tasks will include two in English/language arts and two in mathematics in grades 3-8 and up to six by grade 11 for both subjects. Tasks will be delivered by computer and will take one to two class periods to complete.
Consortium will support development of optional formative and interim/benchmark assessments that align to performance tasks, and an interface for parents, teachers, and administrators to access information on student progress.
PARTNERSHIP FOR THE ASSESSMENT OF READINESS FOR COLLEGE AND CAREERS (26 STATES)
Procurement state: Florida
Governing states: Arizona, District of Columbia, Florida, Illinois, Indiana, Louisiana, Maryland, Massachusetts, New York, Rhode Island, Tennessee
Participating states: Alabama, Arkansas, California, Colorado, Delaware, Georgia, Kentucky, Mississippi, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina
Key design elements: Summative test will be delivered by computer and results coupled with those from performance-based tasks administered over the course of the year.
Performance tasks will include three in English/language arts and three in mathematics.
Benchmarks will be designed so that stakeholders can determine whether students at each grade are on track to be prepared for college or careers.
Consortium will support development of interface for parents, teachers, and administrators to access information on student progress.
STATE CONSORTIUM ON BOARD EXAMINATION SYSTEMS (12 STATES)
Procurement state: Kentucky
Governing states: Arizona, Connecticut, Kentucky, Maine, Massachusetts, Mississippi, New Hampshire, New Mexico, New York, Pennsylvania, Rhode Island, Vermont
Key design elements: Consortium will adapt board examination systems from other countries to align to the common-core standards.
Plans include at least three board examination systems in lower-division high school grades and five in the upper division, in English, math, science, and history, as well as in three career and technical occupational groupings.
SOURCE: Education Week; Consortia Applications
A third, 12-state band is the sole contender for a smaller, $30 million federal competition to support exams aligned to specific high school grades or courses.
Striking Similarities
The federal competition initially gave rise to six consortia, but they merged into three before the final applications were due on June 23. (“States Rush to Join Testing Consortia,” Feb. 3, 2010.)
Experts familiar with the applications noted the similarities between the two larger consortia.
“They look a whole lot alike,” said Scott Marion, the associate director of the Dover, N.H.-based Center for Assessment and a consultant to officials in both the SMARTER Balanced and PARCC groups. “They started with very different visions and ended up converging.”
For instance, both the PARCC and SMARTER Balanced consortia envision a system that couples a year-end test with several performance-based tasks, or “through-course assessments,” that take place over the course of the school year.
The tasks, the applicants wrote, reflect an emphasis in the competition guidelines and the work of the Common Core State Standards Initiative on measuring students’ ability to synthesize, analyze, and apply knowledge, not merely recall it.
And although both consortia would use some form of selected-response questions on their year-end accountability measures, they underscored that their states would explore “technology-enhanced” items that gauge higher-order critical-thinking abilities, rather than rely solely on multiple-choice questions, which don’t lend themselves to gauging those skills.
Tapping Technology
In one key difference between the two proposals, the SMARTER Balanced group plans to employ computer-adaptive technology in some of its measures rather than a traditional fixed-form test. A number of states have experimented with adaptive-test technology, but only Oregon now uses it to meet the NCLB law’s annual testing requirements. Proponents say it yields more accurate estimates of the abilities of very high- and low-performing students.
“Relatively quickly, the adaptive engine can find questions that are appropriate to a student’s level of performance and get a measurement of precision,” said Joe Willhoft, the assistant superintendent of assessment and student information for Washington. “Without adaptive testing, it’s pretty hard to imagine how you could develop a test that’s long enough for that.”
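To illustrate the mechanics Mr. Willhoft describes, here is a minimal sketch, written in Python, of how an adaptive engine might select questions and home in on a student’s ability level. The Rasch-style response model, the made-up item bank, and the simple shrinking-step update are illustrative assumptions only, not details drawn from either consortium’s application.

import math
import random

# Illustrative computer-adaptive testing loop using a one-parameter
# (Rasch) model. Real engines use professionally calibrated item banks
# and more rigorous maximum-likelihood or Bayesian ability estimation.

def p_correct(theta, difficulty):
    # Rasch model: probability that a student of ability `theta`
    # answers an item of the given difficulty correctly.
    return 1.0 / (1.0 + math.exp(difficulty - theta))

def next_item(item_bank, theta, used):
    # Pick the unused item whose difficulty is closest to the current
    # ability estimate -- the most informative choice under this model.
    candidates = [i for i in range(len(item_bank)) if i not in used]
    return min(candidates, key=lambda i: abs(item_bank[i] - theta))

def run_adaptive_test(item_bank, true_ability, n_items=10, step=0.5):
    theta = 0.0  # start the estimate at the average ability level
    used = set()
    for _ in range(n_items):
        i = next_item(item_bank, theta, used)
        used.add(i)
        # Simulate the student's response to the chosen item.
        correct = random.random() < p_correct(true_ability, item_bank[i])
        # Nudge the estimate toward the evidence, shrinking the step
        # so the estimate settles as more items are answered.
        theta += step if correct else -step
        step *= 0.85
    return theta

if __name__ == "__main__":
    bank = [d / 10.0 for d in range(-30, 31)]  # difficulties -3.0 to 3.0
    print("estimated ability:", round(run_adaptive_test(bank, 1.2), 2))

A production engine would also track measurement error and stop once it falls below a set threshold, which is what yields the precise estimates of very high- and low-performing students that proponents cite.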
In addition, the SMARTER Balanced group plans to invest significantly in developing “interim” and formative assessments that would align to the performance-based tasks. Educators would use those tools to gauge student progress and to pinpoint areas of instructional weakness, not for school accountability.
“Our theory is that [a] summative [assessment] alone cannot deliver all the information to have actionable data in the hands of teachers,” said Susan Gendron, the education commissioner in Maine, at the Council of Chief State School Officers’ recent assessment conference in Detroit. “So we will develop interim and formative tools teachers can use to look at learning progressions and where a student is at a given moment on that continuum.”
The PARCC group’s main goal is to devise an instrument that can be used to make accountability judgments and to determine whether students are ready to succeed in college without remediation, or in entry-level jobs. It plans to expend less effort overall on devising formative-assessment pieces and supports, though it would help educators make use of its instruments and released test items for instructional purposes.
Both consortia also plan to build systems for sharing data with parents, teachers, and administrators throughout the school year to help them know whether students are on track to reach benchmarks.
Despite the overall similarities, officials of both consortia said there was no concerted attempt to merge them. They did say, however, that they planned to work together in some areas—such as devising methods for comparing student performance across all the states.
A handful of states, including Alabama, Ohio, and South Carolina, are participating in both of the large consortia.
If both of the major consortia were to win grants, they could expand teacher-scored assessments to a scale not seen before in public education. Though a handful of states have experimented with such scoring, most states discarded the practice in the wake of the NCLB law.
Teachers’ Role
Of the two big consortia, SMARTER Balanced places a stronger emphasis on the teacher scoring of assessments. That group views such scoring as critical for helping teachers recognize when students show evidence they’ve mastered standards.
Under its proposal, teachers would score the performance events and some open-ended questions, supplemented by “artificial intelligence” computer-scoring software. Teachers also would audit a sample of the graded exams, and they would help score interim or benchmark assessments.
The PARCC group envisions a similar system combining both computer and human scoring, but it would allow states to determine whether teachers or test vendors would do the scoring.
“Given the histories, traditions, and costs [of teacher scoring] in different states, it seemed sensible to leave that up to states to decide or districts to decide rather than to make a uniform decision across the states,” Achieve’s Mr. Cohen said.
Neither group would permit teachers to score their own students’ summative exams. The PARCC application notes that some states may eschew teacher scoring if they choose to use the results from the standardized tests to gauge teacher and principal effectiveness.
High School Competition
Both the SMARTER Balanced and the PARCC consortia appear well positioned to earn the 20 additional competitive points the Race to the Top competition awards for attaining “buy-in” from higher education. Each secured commitments from many public universities to use the results of its high-school-level assessments to place students in credit-bearing courses.
The consortium that envisions the most far-reaching changes to the high-school-to-college pipeline is the sole applicant for the smaller high school assessment competition. The State Consortium on Board Examination Systems seeks to move high schools away from the Carnegie unit system, which is based largely on seat time and credit hours, toward one in which students earn new options by mastering a performance standard based on a board examination.
States would adopt such exams, choosing from among many international examples. The consortium would help align the exams to the common standards, but would not create brand-new tests.
Participating states have signed memorandums of understanding committing to pilot the system. They must also agree to offer a new diploma as early as the sophomore year to students who pass lower-division exams offered that year.
After passing such exams, students could go directly to open-admission colleges; follow a career and technical education pathway; or stay in high school to pass higher-division exams in preparation for entry into selective colleges.
Marc S. Tucker, the president of the National Center on Education and the Economy, the project-management partner for the high school consortium, said the board examination systems come complete with other elements, such as curricula and teacher training, to raise the level of instruction.
“It’s not just the assessment we’re adapting, it’s the entire instructional system,” said Mr. Tucker, whose nonprofit organization works to improve the linkages between education and the workplace.
Before the RTT application deadline, a group of state career and technical education officials had filed an intent to compete. But that group ultimately decided not to advance a proposal, citing the short application time frame and a lack of state capacity.
The smaller competition was not dedicated to assessment of CTE-related skills and contexts, added Kimberly A. Green, the executive director of the National Association of State Directors of Career Technical Education Consortium, a Silver Spring, Md., membership group.
The competition “still required you to do two academic tests, and nobody could quite figure out how it differed from [the larger competition],” she said. “CTE was a competitive priority, but it was just a small piece.”
Her group and several of those states will instead participate in the CTE component of the high school consortium’s work.
Next Steps
The competition now lies in the hands of the Department of Education, which plans to award up to two comprehensive assessment-system grants and one high school assessment grant.
There is much enthusiasm for the new efforts, but some experts already have concerns. Mr. Marion of the Center for Assessment, for one, said there isn’t much of a knowledge base to show how the new performance tasks would work in practice.
For instance, it’s unclear how heavily scores on the performance-based tasks should be weighted in the overall score—a key point that neither of the large consortia addressed in its application.
“We don’t have at this point enough understanding about how to do these through-course assessments and incorporate them validly and fairly into summative judgments,” Mr. Marion said.
Wayne Camara, the vice president for research and development at the College Board, questioned at the Detroit CCSSO meeting whether the fast timeline that the RTT program requires allows sufficient time to field-test and pilot the new tests, which must be fully operational by the 2014-15 school year. Developing and using them too quickly could pose significant risks, he said.
“Research in isolation from scale-up” is what’s needed, he said.
Before then, states must figure out how to reach agreement on outstanding features of the systems.
“Reaching and sustaining consensus among a large number of states, when you get down to details of test design and administration, is not an easy thing to do,” Mr. Cohen said. “We learned that with [the American Diploma Project’s algebra assessments], ... and this is much more challenging.”