In the spring of 2010, Larry Berger, the chief executive officer of Wireless Generation, and Lauren B. Resnick, a professor at the University of Pittsburgh, joined a group of more than 250 educators and policymakers in Washington to discuss a shift in the education landscape—the future of next-generation assessments.
Since the adoption of common-core academic standards in mathematics and English/language arts by all but seven states, attention has turned to how students’ mastery of those standards will be assessed across the country. The process has prompted gatherings such as the National Conference on Next Generation Assessment Systems, hosted by the Educational Testing Service’s Center for K-12 Assessment and Performance Management, where Berger and Resnick presented a paper suggesting improvements to the current examination system.
Berger, whose New York City-based company uses technology to improve K-12 schools, and Resnick, a professor of psychology and cognitive science, outlined a system in which assessments would model instructional methods, testing would be embedded in instruction, exams would be aligned with curricula including common-core standards, and data collection and analysis would be leveraged to build personalized assessments for students.
Educators and researchers alike are exploring how to employ technology to assess students in an authentic, meaningful way by embedding assessments into curricula and “scaffolding” them to help pinpoint where students are struggling. Technology also provides the capability for teachers to receive timely feedback from assessments to inform and adjust instruction based on students’ individual strengths and weaknesses.
Yet even though the subject is often discussed by researchers and policymakers, districts have been slow to embrace and implement new evaluation methods.
The potential is there, however, says Berger, whose company was acquired by Rupert Murdoch’s News Corp. last fall. Berger, who remained as CEO, also serves on the board of Editorial Projects in Education, the nonprofit corporation that publishes Education Week.
Although many schools do not have the technology to administer computer-based, high-stakes summative assessments, Berger says, assessment technology is beginning to have an impact on in-class, formative assessments that help complete the feedback loop between assessment and instruction.
Wireless Generation, for example, makes software that lets teachers record observations of students on wireless devices and receive analyses that better inform their instruction. The software is used by 200,000 teachers.
Tara Galloway teaches special education to K-5 students at Ida Rankin Elementary School in Mount Holly, N.C. She uses Wireless Generation’s mCLASS software to assess her students’ reading abilities.
While students complete specific tasks, such as timed readings, Galloway records observations and data on a hand-held device. When the student is finished, the software immediately analyzes the data and shows results to the teacher and the student.
“That instant feedback is important, for [teachers] and the students,” says Galloway. Immediate feedback motivates the students to better understand the material and helps her adjust instruction as she goes, she says. The technology also gathers all the data in one place, so she can see it easily and share it with others.
“I used to do the assessments just to do them, to check it off my list,” she says. “Now I do them because I want to see how the children are doing, and the children want to see how they’re doing, and it changes the way I teach.”
“It’s not that technology is magically evaluating higher-order thinking—humans are, but with technology tools to make them much more efficient,” says Berger of Wireless Generation.
The time between sending data out to be analyzed and getting it back to the teacher is shrinking, adds Resnick. New technologies allow complex data analysis to take place online rather than on school-hosted systems, so teachers can receive feedback about their students’ performance quickly.
Making sure that assessments are aligned with curricula, so that students and teachers know beforehand what the exam questions will cover, is essential to moving assessment forward, Resnick says.
“There should be very little surprise [when students take the exam],” she says.
Familiarizing students with academic concepts and assessment questions before they take a test is an important consideration for educators, says Daniel T. Hickey, an associate professor of counseling and educational psychology at Indiana University in Bloomington.
“We don’t know what context these students will be using [the knowledge teachers are teaching them] in. We can be pretty sure that the context doesn’t even exist right now,” he says. Consequently, he emphasizes, it’s essential that students know how to use the concepts they’re taught in different contexts.
Pinpointing Student Knowledge
Since 2007, researchers at the Princeton, N.J.-based ETS, the giant nonprofit assessment research-and-development organization that makes such tests as the SAT, have been using technology to create a new kind of assessment, one intended to pinpoint what students know, help students and teachers plan and adjust instruction, and be a learning experience in and of itself.
It is called the Cognitively Based Assessment of, for, and as Learning, or CBAL.
The assessment leverages technology to scaffold the testing, thereby helping the instructor determine exactly where student misunderstanding occurs, says Paul Deane, a principal research scientist for the ETS.
For example, to assess students’ skills at writing a persuasive essay, CBAL requires students to complete a multistep process before writing the final essay, he says.
The students must familiarize themselves with the pro and con arguments on a topic, outline the main points, and dissect the arguments to show how they work. If a student is not successful on this first series of steps, he or she is unlikely to receive a high score on the final essay, Deane says. The score should indicate to the teacher that the student might be struggling with reading or vocabulary, for example.
Next, students are asked to complete a few more tasks, including determining whether certain evidence weakens or strengthens their arguments and crafting rebuttals to flawed arguments, before attempting the first drafts of their own essays.
The step-by-step method of assessing students turns the experience into one that they can build on and learn from, Deane argues. And the assessment should provide valuable information for the teacher as well, he points out.
“It gives [the teacher] a library of models that are supposed to do as much as possible to exemplify good instructional strategies and useful ways of gathering information about the student,” he says, adding that since the assessment is administered by computer, keystroke data during the exam can give an inside look into a student’s thought process.
“We can use process data to get additional information about what is happening,” he says, such as how often students type, how often they pause, and how they edit their essays.
CBAL has been piloted in 7th and 8th grades in more than 25 schools in 17 states, and more than 8,000 assessments have been administered.
“I think we’re certainly significantly far along with respect to delivering assessments like this for formative purposes,” says Randy Bennett, who holds the Norman O. Frederickson chair in assessment innovation at the ETS. “The bigger challenge is in the scaling up to summative assessments that have consequential purposes.”
A lack of technology, as well as a lack of funding, poses a challenge to making CBAL mainstream, Bennett says.
“Assessments like these are significantly different and more costly because they are new,” he says. “The efficiencies of scale are not yet there.”
Examining E-Portfolios
In the meantime, some teachers are using technology tools to create performance-based student assessments, such as e-portfolios.
Helen Barrett, a former professor at the college of education at the University of Alaska Anchorage, has spent the past 20 years researching strategies and technologies for e-portfolios. Such portfolios provide a collection of student work and require students to reflect on their work and progress.
“What we want to do is help learners not only be much more aware of their own skills and competencies as they relate to standards or a rubric, but also to be able to reflect and write on that,” Barrett says. “An e-portfolio should be more of a conversation about learning than a one-way presentation about learning.”
Having students take ownership of their portfolios is essential to maximizing the potential of the evaluation, says Barrett.
“We need to get students intrinsically motivated about developing the portfolios,” she says. “It’s not the kind of routine assignment where teachers tell them what to put into it and what to write.”
E-portfolios provide students an opportunity to beef up their self-assessment skills and become more familiar with different types of technology, Barrett adds. 69ý can embed videos and images in their e-portfolios, and they can use blogs or podcasts to reflect on their work.
Mobile devices add another dimension to e-portfolios, allowing students to reflect “at the moment the learning takes place,” Barrett says.
Embracing e-portfolios brings a level of authenticity to the assessment that students typically do not experience, says G. Alex Ambrose, an academic adviser at the University of Notre Dame and the founder of EdVibes, an ed-tech consulting firm.
69ý can go on to use what they’ve gathered in e-portfolios to apply to college or use in a job interview, says Ambrose, making the portfolio meaningful beyond the school walls.
Most K-12 schools, however, have not used e-portfolios to evaluate student performance, he says, partly because of “the culture of the school from the administration to the parents. They’re just not ready for the technology.”