School leaders have always needed reliable information on the status of student academic achievement. Information on a global scale is ever more pertinent today, because the school-improvement movement and its attendant economic implications are without borders.
Educators also must have public support. Regardless of ideology or instructional philosophy, there is virtual unanimity in the United States behind the proposition that our education system needs strong backing in every community.
If information is inspiration, some of this support could flow from IAEP II.
The purpose of the 1991 international assessment, then, is to produce, one year later in March of 1992, a set of reports that will detail each country's achievement results, catalog home and classroom factors that affect student learning in the various countries, and describe other relevant behaviors, such as how much homework students do and how much television they watch.
Why bother? Can any assessment account for the differences between a rural classroom in Korea and one in France, or between Taiwanese textbooks and the learning resources available in a Russian school? Why invest student time, teacher energy, and school cooperation in an international assessment?
Because these 13-year-olds share a planet whose ozone layer is fraying. Theirs will be a world grappling with complex technological issues, acid rain, radioactive waste, untreatable illness, hunger. In 10 years, when they are 23 years old, these youths will be shaping our global environment.
Today, the mathematical and scientific knowledge accumulated by the 105 million 13-year-olds on the earth is a nest egg for the planet.
The project that begins in March will rely on a careful structure and proven techniques. It will employ the same sampling procedures in each country, present the same test under the same standardized procedures, and ask the same background and attitude questions. Reports will carefully note the proportion of each country's 13-year-olds who, for one reason or another, are not represented in the national sample. Each country will develop and follow a quality-control plan approved by Educational Testing Service, the project administrator, to ensure the validity and reliability of the findings. 69传媒 will be visited at random during the assessment.
Based on samples that represent more than one-fourth of the world鈥檚 13-year-old population and building on tested procedures, IAEP II will generate a status report rich in information on a range of educational activities and outcomes.
In 1987, IAEP I, funded by the National Center for Education Statistics and the National Science Foundation, demonstrated that some of the content and procedures developed for the National Assessment of Educational Progress could be used to improve the efficiency of an international comparative study. With hundreds of mathematics- and science-test questions and a large investment in the methodology of assessment, NAEP was an appealing model for application in this wider sphere.
Data from IAEP I, reported in 1989 in A World of Differences, suggest a number of benefits from this kind of study:
- To those setting standards for student achievement, it is instructive to observe what 13-year-olds in various countries can achieve. Those with the responsibility for setting achievement goals in the United States, for example, should know that in the Canadian province of Quebec and in Korea, more than 70 percent of 13-year-olds succeed in solving two-step mathematics problems, compared with 40 percent of our students.
These examples suggest how comparative findings can be worthwhile--if the data are valid and reliable and the results can be produced quickly and efficiently.
Thanks to NAEP's tested procedures, along with a fair amount of discipline, IAEP I yielded a thought-provoking report in less than three years, compared with previous experiences requiring six or more years. That test also indicated that while many of NAEP's data-analysis techniques and reporting procedures "travel well," the journey for test content, even in mathematics and science, requires extraordinary care.
Comparative statistics, whether economic, medical, or educational, always face legitimate challenges:
- Are the samples truly comparable? They must be independently and rigorously drawn. Each report must clearly identify the ranges of sampling error that influence the reliability of reported statistics, as well as the percentage and the characteristics of each country's student population that is represented.
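As a rough illustration (an aside, not drawn from the IAEP reports themselves), the sampling-error range attached to a reported percentage can be sketched with the standard survey formula below, assuming a simple random sample of n students and an approximate design effect (DEFF) to reflect the clustered school samples such studies actually draw:

\[
SE(\hat{p}) \approx \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \times \sqrt{\mathrm{DEFF}},
\qquad
\text{95 percent margin} \approx 1.96 \times SE(\hat{p})
\]

With assumed values of $\hat{p} = 0.70$, $n = 1{,}500$, and $\mathrm{DEFF} = 2$, the margin works out to roughly plus or minus three percentage points. The figures here are hypothetical; the point is only that every reported percentage carries an uncertainty band that readers of country comparisons should keep in view.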
Education policymakers as well as teachers from around the world are searching for tools to help them identify and set reasonable standards. They are seeking with even greater interest to identify the factors that seem to improve the learning environment. Information from a variety of foreign countries, some with environments that closely parallel our own (Canada) and some that differ greatly (China), can yield clues about what is possible and about strategies that may be helpful.
Inevitably, these data will cause us to reflect upon a range of generally accepted assumptions about the preparation of teachers, the type of learning materials available, the student-teacher ratio, the length of school days and the school year, as well as many societies' values and attitudes about the importance and role of education.
But this kind of information can only be as helpful as its quality will allow. How can we assure that it will be as valid and reliable as possible? How can we be confident that its dissemination will be as accurate and as responsible as we can make it?
In the planning for IAEP II, and as the project has been implemented with the guidance of the National Academy of Sciences' Board on International Comparative Studies in Education, the multinational project team has addressed these questions systematically and conscientiously. The great motivator has been the self-interest of each participating country. The expenditure of this much energy, effort, and money would be pointless if the yield were unreliable data.
The results of IAEP II will be as good as current technology allows. Like all survey research, the findings will have limitations. Nonetheless, with reasonable interpretation, they will constitute useful tools for the many professionals charged with the responsibility for finding ways to improve learning.
In the short term, the reports from the test will provide insights into possible achievement targets and how we might improve academic achievement in the United States. They will be useful in spurring greater efforts to support our schools.
In the long run, the assessment techniques polished through projects like IAEP will be repeatedly refined, to the benefit of educators in all nations.
They, in turn, will advance from asking "why on earth" about the testing process itself to understanding "how on earth" each distinctive society prepares its children for their successful contribution to a shared future. That is, educators will learn from each other.
That's the bottom line.