A group of states that is designing tests for the common academic standards has taken a key step to ensure that the assessments reflect students’ readiness for college-level work: It gave top higher education officials from member states voting power on the test-design questions most central to college readiness.
At its quarterly meeting on April 3, the governing board of the Partnership for Assessment of Readiness for College and Careers, or PARCC, voted unanimously to give members of its advisory committee on college readiness voting power on four issues: how to describe the expected performance levels on the tests, who will set the cutoff scores for the tests, what evidence will be used to decide the cutoff scores, and, crucially, what the cutoff scores will be.
The move puts the highest-ranking officials from one college or university system in most of PARCC’s 24 member states at the voting table, alongside its governing board—the K-12 schools chiefs from each member state—when it comes to the most pivotal questions about crafting tests that reflect college readiness.
Richard M. Freeland, the commissioner of higher education in Massachusetts and co-chairman of PARCC’s college-readiness advisory committee, told the governing board that getting an active voice in the test-shaping process was something “we enthusiastically endorse and are happy to put our energy behind.”
The consortium is “taking a huge step in operationalizing” a definition of college readiness that reflects higher education’s expectations, Mitchell D. Chester, the commissioner of K-12 education in Massachusetts and the chairman of PARCC’s governing board, told the meeting participants.
Support Pivotal
PARCC’s decision illustrates the importance that states are placing on higher education’s embrace of the common-standards tests as proxies for college readiness. Colleges and universities have pledged support for the idea. But their willingness to actually use the final tests that way—to let students skip remedial work and go straight into entry-level, credit-bearing courses—is considered pivotal to the success of the common-standards initiative, which rests on the idea that mastery of the standards will prepare students for college study.
“This verges on being historic,” said David T. Conley, an Oregon researcher widely known for his work to define college readiness. “In the U.S., on this scope and scale, it’s unprecedented to have this level of partnership between postsecondary systems and high school on a measurement of readiness.”
PARCC and another group of states, the SMARTER Balanced Assessment Consortium, have $360 million in federal Race to the Top money to design assessment systems for the Common Core State Standards. The standards, which cover English/language arts and mathematics, have been adopted by 46 states and the District of Columbia.
When the U.S. Department of Education offered test-design funding to groups of states, in April 2010, it asked for assessment systems that can serve many purposes. Those include measuring student achievement as well as student growth, judging teacher and school performance, offering formative feedback to help teachers guide instruction, and providing gauges of whether students are ready—or are on track to be ready—to make smooth transitions into college and good jobs.
Leaders of both consortia recognize that much is riding on the support of higher education, since the common-standards initiative rests on the claim that mastery of the standards—and passage of tests that embody them—indicate readiness for credit-bearing entry-level coursework. If colleges decline to use the tests to let students skip remedial work, that could undermine the claim that the tests reflect readiness for credit-bearing study.
That thinking was woven through the Education Department’s initial invitation to the states to band together to design the tests. To win grants in that competition, the consortia had to show that they had enlisted substantial support from their public college and university systems. Both did so.
The Challenge of Consensus
Whether those higher education systems maintain their support for the final tests remains to be seen, however. Skeptics have noted that getting states’ K-12 systems and their diverse array of college and university systems to agree on cutoff scores that connote proficiency in college-level skills, for instance, will be challenging.
“This cut-score thing is going to be a nightmare,” Chester E. Finn Jr., the president of the Thomas B. Fordham Institute, a Washington think tank, said at an August 2010 meeting of the National Assessment Governing Board, which sets policy for the National Assessment of Educational Progress, or NAEP. “I’m trying to envision Georgia and Connecticut trying to agree on a cut score for proficiency, and I’m envisioning an argument.”
PARCC’s college-readiness committee will not only vote on test-design issues; it already plays an active role in the consortium’s strategy to engage higher education colleagues in dialogue about the assessments and enlist their support, PARCC officials said. The consortium’s higher education leadership team, which includes additional college and university leaders, is also playing a leading role in that dialogue and engagement.
The SMARTER Balanced Assessment Consortium’s nine-member executive committee includes two higher education representatives with full voting power: Charles Lenth, the vice president for policy analysis and academic affairs for the State Higher Education Executive Officers, a Boulder, Colo.-based group, and Beverly L. Young, the assistant vice chancellor of academic affairs for the California State University system.
In addition, the consortium has appointed higher education representatives from each member state to provide input into test development and coordinate outreach to colleges and universities in their states. Higher education representatives also take part in 10 “work groups” that focus on key issues, such as psychometrics, technology, and accessibility and accommodations.
The consortium’s governance structure “is designed to ensure input from higher education through representation on the executive committee, collaboration with higher education state leads, and participation in state-led work groups,” said consortium spokesman Eddie Arnold.
Mr. Conley, who advises the SMARTER Balanced group, said it is important to have higher education representatives at the table during test design to create a shared concept of the skills necessary for college success and how to measure them on a test. But he cautioned that those ideas must also have the support of college faculty members—not just their leadership—if the idea of shared standards is to succeed.
Discussion at the PARCC governing board meeting offered hints about the difficulty of getting consensus on critical issues of test design.
Soliciting feedback from board members, Mary Ann Snider, Rhode Island’s chief of educator quality, asked how many performance levels they thought the tests should have: three, four, five, or some other number. Most members voted for four levels, largely mirroring current practice in most PARCC states. Ms. Snider also asked when indicators of being “on track” for college readiness should first appear on test results: in elementary, middle, or high school. Most members voted for elementary school.
She also asked whether the tests should show only how well students have mastered material from their current grade levels, or how well they’ve mastered content from the previous grade level, too. Responses came back deeply divided.
Bumpy Road Ahead
That question touched on an important part of the dialogue about the new assessments: how to design them so they show parents, teachers, and others how students are progressing over time, rather than providing only a snapshot of a single moment. But the prospect of having a given grade’s tests reflect students’ mastery of earlier grades’ content raised some doubts on the board.
“If I’m a 5th grade teacher, am I now responsible for 4th grade content in my evaluation?” asked James Palmer, an interim division administrator in student assessment at the Illinois state board of education.
Gayle Potter, the director of student assessment in Arkansas, said it’s important to give parents and teachers information about where students are in their learning. But she also said she worried about “giving teachers mixed signals” about their responsibility for lower grades’ content.
Some board members noted that indicators of mastery of the previous year’s content would be helpful in adjusting instruction. But others expressed doubt about whether a summative test was the best way to do that. Perhaps, they said, that function is better handled by other portions of the planned assessment system, such as its optional midyear assessments.