Three state consortia will vie for $350 million in federal financing to design assessments aligned to the recently unveiled common-core standards, according to applications submitted Wednesday to the U.S. Department of Education.
Part of the Race to the Top program, the competition aims to spur states to band together to create measures of academic achievement that are comparable across states.
Two consortia—the SMARTER Balanced Assessment Consortium, which consists of 31 states, and the Partnership for the Assessment of Readiness for College and Careers, or PARCC, which consists of 26 states—will compete for the bulk of the funding, $320 million, to produce comprehensive assessment systems.
Potentially signaling a shift away from the multiple-choice questions that dominated tests in the wake of the No Child Left Behind Act, both consortia would combine results from performance-based tasks administered throughout the course of the school year with a more traditional end-of-the-year measure for school accountability purposes.
State officials “wanted to make sure that the assessments were actually signaling appropriately the kind of instruction that teachers were expected to engage in and performance students were expected to be able to do,” said Michael Cohen, the president of Achieve, a Washington-based nonprofit group that is a project-management partner for the PARCC consortium. “They didn’t want a bunch of bubble tests to drive instruction.”
Both consortia also plan to administer their year-end assessments via computer. But only the SMARTER Balanced group would use “computer adaptive” technology, which adjusts the difficulty of test questions in relation to a student’s responses, as the basis of that year-end test.
The consortia also propose to provide participating states with formative-assessment tools and data-management systems to help administrators and parents access student-performance information over the course of the year and to help teachers intervene and adjust instruction as it occurs.
‘SMARTER’ BALANCED ASSESSMENT CONSORTIUM (31 STATES)
Procurement state: Washington
Governing states: Connecticut, Hawaii, Idaho, Kansas, Maine, Michigan, Missouri, Montana, Nevada, New Mexico, North Carolina, Oregon, Utah, Vermont, Washington, West Virginia, Wisconsin
Participating states: Alabama, Colorado, Delaware, Georgia, Iowa, Kentucky, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina, South Dakota
Key Design Elements: Summative assessment will be based on computer-adaptive technology. Results will be coupled with those from performance-based tasks administered over the course of the year.
Performance tasks will include two in English/language arts and two in mathematics in grades 3-8 and up to six by grade 11 for both subjects. Tasks will be delivered by computer and will take one to two class periods to complete.
Consortium will support development of optional formative and interim/benchmark assessments that align to performance tasks, and an interface for parents, teachers, and administrators to access information on student progress.
PARTNERSHIP FOR THE ASSESSMENT OF READINESS FOR COLLEGE AND CAREERS (26 STATES)
Procurement state: Florida
Governing states: Arizona, District of Columbia, Florida, Illinois, Indiana, Louisiana, Maryland, Massachusetts, New York, Rhode Island, Tennessee
Participating states: Alabama, Arkansas, California, Colorado, Delaware, Georgia, Kentucky, Mississippi, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina
Key Design Elements: Summative test will be delivered by computer and results coupled with those from performance-based tasks administered over the course of the year.
Performance tasks will include three in English/language arts and three in mathematics.
Benchmarks will be designed so that stakeholders can determine whether students at each grade are on track to be prepared for college or for careers.
Consortium will support development of interface for parents, teachers, and administrators to access information on student progress.
STATE CONSORTIUM ON BOARD EXAMINATION SYSTEMS (12 STATES)
Procurement state: Kentucky
Governing states: Arizona, Connecticut, Kentucky, Maine, New Hampshire, New Mexico, New York, Pennsylvania, Rhode Island, Vermont, Massachusetts, Mississippi
Key Design Elements: Consortium will adapt board examination systems from other countries to align to the common-core standards.
Plans include at least three board examination systems in lower-division high school grades and five in the upper division, in English, math, science, and history, as well as in three career and technical occupational groupings.
SOURCE: Education Week; Consortia Applications
A band of 12 states is the sole contender for a smaller, $30 million competition earmarked by the Education Department to support exams aligned to specific high school grades or courses.
Similarities and Differences
The federal competition initially gave rise to six assessment consortia, but those consortia merged into three before the final applications were due. (“States Rush to Join Testing Consortia,” Feb. 3, 2010.)
Education Week obtained the three proposals from the consortia in advance of the application deadline, after officials at the Education Department said they could not make the applications immediately available online. The Education Department also received a fourth application, from a Texas-based organization called Free to Be, but that application listed no states as consortium members, a required eligibility criterion for the competition.
Experts familiar with the applications noted the similarities between the two larger consortia’s submissions.
“They look a whole lot alike,” said Scott Marion, the associate director of the Dover, N.H.-based Center for Assessment and a consultant to officials in both the SMARTER Balanced and PARCC groups. “They started with very different visions and ended up converging.”
For instance, both the PARCC and SMARTER Balanced consortia envision a system that couples a year-end assessment with several performance-based tasks, or “through-course assessments,” that take place over the course of the school year.
Those tasks, the applicants wrote, reflect an emphasis in the federal competition guidelines and in the work of the Common Core State Standards Initiative on measuring students’ ability to synthesize, analyze, and apply knowledge, not merely recall it.
And although both consortia would use some form of selected-response questions on their year-end accountability measures, they underscored that their states would explore the use of “technology enhanced” items that gauge higher-order critical-thinking abilities, rather than rely solely on multiple-choice questions that don’t lend themselves to measuring those skills.
Such abilities might be measured, for instance, by using items that require students to interact with on-screen features, such as a graph.
In one key difference between the two proposals, the SMARTER Balanced group plans to employ computer-adaptive technology in some of its measures rather than a traditional fixed-form test. A number of states have experimented with adaptive-test technology, but only Oregon now uses it to meet the NCLB law’s annual testing requirements.
Joe Willhoft, the assistant superintendent of assessment and student information for Washington, the state applying on behalf of that consortium, said the technology helps meet the federal competition’s stipulation that the common tests cover the breadth and depth of the common standards and provide equally accurate information on both low- and high-achieving students.
“Relatively quickly, the adaptive engine can find questions that are appropriate to a student’s level of performance and get a measurement of precision,” Mr. Willhoft said. “Without adaptive testing, it’s pretty hard to imagine how you could develop a test that’s long enough for that.”
In addition, the SMARTER Balanced group plans to invest significantly in developing “interim” and formative assessments that would align to the performance-based tasks. Educators would use those tools to gauge student progress and to pinpoint areas of instructional weakness, not for school accountability.
“Our theory is that [a] summative [assessment] alone cannot deliver all the information to have actionable data in the hands of teachers,” said Susan Gendron, the education commissioner in Maine, one of the governing states in the consortium, at the Council of Chief State School Officers’ recent assessment conference in Detroit. “So we will develop interim and formative tools teachers can use to look at learning progressions and where a student is at a given moment on that continuum.”
The PARCC group's main goal would be to devise an instrument for making judgments about whether students are able to succeed in college without remediation, or do well in entry-level jobs. It plans to expend less effort overall on devising the formative-assessment pieces and supports, though it would help educators make use of its instruments and released test items for instructional purposes.
Both consortia also plan to build systems for sharing data with educators and parents throughout the school year to help them know whether students are on track to reach benchmarks.
Despite the overall similarities in the proposals, officials involved in both consortia said there was no concerted attempt to merge them into just one.
They did say, however, that they planned to work together in some areas—such as devising common benchmarks indicating when a student has met standards, and methods for comparing student performance across all the states.
And a handful of states, including Alabama, Ohio, and South Carolina, are participating in both of the large consortia, but aren’t yet part of the governing body of either one.
Scoring Glitches
If both of the major consortia were to win grants, they could expand teacher-scored assessments to a scale not seen before in public education. Though a handful of states have experimented with such scoring, most states discarded the practice in the wake of the NCLB law.
Of the two consortia vying for grants to create comprehensive assessment systems, the SMARTER Balanced one takes a stronger approach toward the teacher scoring of assessments. That group views teacher scoring as a critical part of professional development for teachers that would help them recognize when students’ work products show evidence that they’ve mastered standards.
Under its proposal, teachers would score the performance events and some open-ended questions, supplemented by “artificial intelligence” computer-scoring software. Teachers also would audit a sample of the graded exams, and they would help score interim or benchmark assessments.
The PARCC group envisions a similar system combining both computer and human scoring, but it would allow states to determine whether teachers would participate in human scoring or whether test vendors would do it.
Some states, like New York with its regents exams, have much more experience with teacher-scoring practice than others, Achieve’s Mr. Cohen noted.
“Given the histories, traditions, and costs in different states, it seemed sensible to leave that up to states to decide or districts to decide rather than to make a uniform decision across the states,” he said.
Neither group, however, would permit teachers to score their own students’ summative-exam results.
The PARCC application notes that some states may decide to eschew teacher scoring if they choose to use the results from the standardized tests to gauge teacher and principal effectiveness.
High School Assessments
Both the SMARTER Balanced and the PARCC consortia seem well positioned to receive 20 additional competitive points in the Race to the Top competition for attaining “buy-in” from higher education. Each secured many commitments from public colleges and universities to use the results of high-school-level assessments developed by the consortia to place students into credit-bearing courses.
According to the PARCC consortium’s application, 184 institutions of higher education across the states that represent 90 percent of direct-matriculation students have agreed to do so. In the SMARTER Balanced states, 162 institutions signed on, representing 74 percent of direct-matriculation students.
The consortium that envisions the most far-reaching changes to the structure of the high-school-to-college pipeline is the sole applicant for the smaller high school assessment competition.
The State Consortium on Board Examination Systems (SCOBES), a group of 12 states, seeks to shift high school away from the Carnegie unit system, based largely on seat time and credit hours, to one in which students are given options after mastering a performance standard based on a board examination.
States would adopt such exams from among a choice of examples from around the world. The consortium would help align the exams to the common standards, but would not create brand-new tests.
Participating states have signed memorandums of understanding committing to pilot the system in select schools. They must also agree to offer a new diploma as early as the sophomore year of high school to those students who pass lower-division exams offered that year.
After passing such exams, students could go directly to open-admission colleges without needing to take remedial classes; follow a career and technical education pathway; or continue on in high school to pass higher-division exams in preparation for entry into selective colleges.
Marc S. Tucker, the president of the National Center on Education and the Economy, the project management partner for the high school consortium, said that the board examination systems come complete with other elements, such as curricula, to raise the level of instruction.
“What we’re offering is very high-quality assessment that is directly linked to curriculum, directly linked to teacher training,” said Mr. Tucker, whose nonprofit organization works to improve the linkages between education and the workplace. “It’s not just the assessment we’re adapting, it’s the entire instructional system.”
But it would not pave the way to a common curriculum, since states could choose which board examination system to adopt, he added.
Before the RTT application deadline, a panel of state career and technical education officials had filed an intent to compete in the high school competition. But that group ultimately decided not to advance a proposal, according to Kimberly A. Green, the executive director of the National Association of State Directors of Career Technical Education Consortium, a Silver Spring, Md., membership group.
Ms. Green said the officials withdrew in part because of the short time frame for submitting a proposal and a lack of capacity among interested states.
But it was also because the smaller competition was not dedicated to assessment of CTE-related skills and contexts, she said.
The competition “still required you to do two academic tests, and nobody could quite figure out how it differed from [the larger competition],” she said. “CTE was a competitive priority, but it was just a small piece.”
Her group and several of the states instead will participate in the CTE component of the SCOBES consortium’s work.
Next Steps
With the applications in, the competition now lies in the hands of the Department of Education, which plans to award up to two comprehensive assessment-system grants and one high school assessment grant under the competition.
There is much enthusiasm for the new efforts, but already some experts have concerns. Mr. Marion of the Center for Assessment, for one, says there isn’t much of a knowledge base to determine how the new performance tasks would work in practice.
For instance, it’s unclear how heavily scores on the performance-based tasks should be weighted in the overall assessment score—a key point that neither of the large consortia addressed in its application.
“We don’t have at this point enough understanding about how to do these through-course assessments and incorporate them validly and fairly into summative judgments,” Mr. Marion said. “It’s a good idea, but there is a way to go to put them into practice.”
Wayne Camara, the vice president for research and development at the College Board, said at the Detroit CCSSO meeting that the lack of funding for assessment research in general hampers any effort to develop sound, large-scale new tests. That situation is exacerbated by the fast timeline that the RTT program requires, with tests fully operational by the 2014-15 school year.
He questioned whether the timeline allows sufficient time to field-test and pilot the new tests and cautioned that developing and using them too quickly could pose significant risks.
“Research in isolation from scale-up” is what’s needed, Mr. Camara said.
Even before then, states must figure out how to reach agreement on outstanding details. Mr. Cohen of Achieve conceded that the PARCC proposal is still at “a high level of generality,” and that if the group wins a grant, much work remains to iron out details on the essential features of the assessment system.
“Reaching and sustaining consensus among a large number of states, when you get down to details of test design and administration, is not an easy thing to do,” he said. “We learned that with [the American Diploma Project’s algebra assessments], ... and this is much more challenging.”