The economist Lawrence Mishel, president of the Economic Policy Institute, presented in these pages several weeks ago a critique of recent research on dropout rates (“The Exaggerated Dropout Crisis,” March 8, 2006). His attack on calculations of high school graduation rates that use official enrollment statistics from the U.S. Department of Education is off base on a number of key points.
In his essay, Mr. Mishel mischaracterizes the findings of our previous studies and expresses an extraordinary faith in the ability of the U.S. Census Bureau’s Current Population Survey, or CPS, to do something for which it was never meant: produce accurate calculations of high school graduation rates. He also seems all too willing to accept at face value officially reported state and district statistics that education researchers would, at the very least, approach with a healthy degree of skepticism.
Independent calculations, including ours, put the country’s high school graduation rate near 70 percent overall, and near 50 percent for minority students. Mr. Mishel incorrectly claims that these estimates find that graduation rates have declined recently. In fact, we find that the rates have largely been stagnant, with some signs of improvement in the past few years.
Mr. Mishel argues that these numbers must be incorrect, because the Current Population Survey yields figures closer to 90 percent overall and 80 percent for minority students. Yet it is not possible for us (or anyone else) to directly refute Mr. Mishel’s numbers. Why? Because he released his Commentary in advance of the report upon which his claims are based. (At the time we submitted this response for publication, the report still had not been publicly released.) This, to say the least, is not regarded as best practice within professional research circles. After all, it is hard to respond constructively to something before it exists.
But we have heard similar arguments before.
To produce our estimates, we use officially reported enrollment and diploma counts provided by the National Center for Education Statistics, the statistical arm of the Education Department, in its Common Core of Data, or CCD. The NCES requires states to report data using standardized definitions. If states do not, their data are not reported. This makes the CCD the only currently available source of comparable state-by-state and district-by-district data on public school enrollments and diploma counts.
Although the researchers who place the national graduation rate at around 70 percent use different formulas, they all rely on these CCD enrollment and diploma counts. The benefit of using enrollment counts is that schools find it relatively easy to count the students who are actually in their classrooms. Further, since funding is generally allocated on a per-enrolled-student basis, schools have an incentive to know how many students they enroll (a large reason why they take attendance every day), and states have an incentive to be certain that the counts are correct. States and school districts should also have a pretty good idea of how many diplomas they award, since they have to print and distribute them, after all.
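For readers who want to see the general shape of such a calculation, here is a minimal sketch. It is an illustration of the basic idea only, not the exact formula used in any of our published reports; in particular, the simple three-grade average used to estimate the entering 9th grade class is a stand-in for the smoothing actually applied.

```python
# Illustrative sketch only: a simplified cohort graduation-rate estimate
# built from CCD-style enrollment and diploma counts. The three-grade
# average below is a stand-in for the smoothing used in published reports.

def estimated_entering_9th_graders(enroll_8th, enroll_9th, enroll_10th):
    """Average the 9th grade count with adjacent grades to dampen the
    '9th grade bubble' created by students held back in grade."""
    return (enroll_8th + enroll_9th + enroll_10th) / 3.0

def cohort_graduation_rate(diplomas, enroll_8th, enroll_9th, enroll_10th):
    """Diplomas awarded in a given year, divided by the estimated size of
    the class that entered 9th grade four school years earlier."""
    return diplomas / estimated_entering_9th_graders(enroll_8th, enroll_9th, enroll_10th)

# Hypothetical counts, for illustration only (not actual CCD figures):
rate = cohort_graduation_rate(2_800_000, 3_600_000, 3_900_000, 3_500_000)
print(round(100 * rate, 1))  # about 76.4 percent with these made-up numbers
```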
Why, then, are the graduation rates from the Census Bureau’s Current Population Survey so much higher than estimates based on actual counts of students and diplomas? A number of factors are probably at play.
Though the CPS is a high-quality survey for answering many questions, its primary purpose is to measure workforce characteristics. Educational outcomes are at best a secondary consideration. The survey relies on self- or secondhand reporting about high school graduation, rather than hard counts. As a household survey, the CPS does not capture members of institutionalized populations, such as prisoners. Mr. Mishel suggests that because the CPS also excludes military personnel, who are more likely to be graduates, we should be less concerned about the bias from undercoverage. But two wrongs hardly make a right. And the CPS, like any survey, has difficulty finding and documenting marginalized and disadvantaged people not attached to households, who are more likely to be dropouts.
The CPS is also unable to distinguish between recipients of General Educational Development diplomas and actual high school graduates. In fact, the Education Department was so concerned about the quality of the completion data from the Current Population Survey that it stopped reporting CPS-based breakdowns for diploma vs. GED recipients in its long-running series on high school dropout and completion.
Mr. Mishel points out that graduation rates calculated using the National Education Longitudinal Study, or NELS, are closer to the population-survey estimates than those using the Education Department’s CCD data, such as our own graduation-rate estimates. Conducted by the National Center for Education Statistics, NELS is a highly respected longitudinal survey of a nationally representative group of students who were in the 8th grade in 1988. But surveys like NELS and the CPS can underestimate dropouts, because dropouts are harder to find and follow over time than other respondents.
Phillip Kaufman, who was the primary author of previous government calculations of graduation rates using the Current Population Survey, has indicated that such a coverage bias probably exists. Specifically, dropouts are less likely to be reached by sample surveys (that is, they are “undercovered”). In a report for the Harvard Civil Rights Project, Mr. Kaufman estimated that if we made the reasonable assumption that 50 percent of those undercovered by the CPS were dropouts, we would end up with a completion rate of 80.4 percent. If we then further excluded GED recipients from that estimate, we would get much closer to the estimate of a 70 percent graduation rate that we, the authors, and others suggest.
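To show the mechanics of that kind of undercoverage adjustment, here is a small sketch. The coverage and completion figures in the example are hypothetical placeholders chosen to illustrate the arithmetic; they are not Mr. Kaufman’s actual estimates.

```python
# Illustrative sketch of a survey-undercoverage adjustment. The numbers in
# the example are hypothetical placeholders, not Mr. Kaufman's estimates;
# they are chosen only to show how the arithmetic works.

def adjusted_completion_rate(reported_rate, undercovered_share, completer_share_among_missed):
    """Blend the completion rate reported for covered respondents with an
    assumed completion rate among the people the survey misses."""
    covered_share = 1.0 - undercovered_share
    return reported_rate * covered_share + completer_share_among_missed * undercovered_share

# Example: a reported 90 percent completion rate, a hypothetical 20 percent
# undercoverage, and the assumption that half of the missed population
# dropped out (so half completed).
print(round(100 * adjusted_completion_rate(0.90, 0.20, 0.50), 1))  # 82.0
```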
The only way to evaluate the accuracy of an estimate from a sample like the CPS or NELS is to compare it to the broader population the survey attempts to capture. Usually this population is unknown, but in this case it is available from the Education Department’s Common Core of Data. Mr. Mishel makes the extraordinary leap of criticizing the population computations because the sample-based results disagree with them. Nonetheless, we can perform a few quick checks to get an idea of which numbers better reflect reality.
Mr. Mishel specifically worries about our estimation of the entering 9th grade class, which he claims must be distorted because students held back in grade inflate 9th grade enrollments. Two of us, Jay Greene and Marcus Winters, make a correction for this “9th grade bubble” that appears to minimize any bias. Using the official Common Core of Data enrollment counts, we estimate that about 3,635,420 students entered the 9th grade in public schools in 1999. According to the Census Bureau—in a number derived from its Current Population Survey—there were 3,892,340 14-year-olds in the nation that June. NCES figures indicate that 835,328 students attended private high schools (in 2001), which, divided by four, suggests that there are about 208,832 9th graders in private schools. Subtracting the private school 9th graders from the 14-year-old population leaves a difference between that population and our estimated entering 9th grade class of about 48,088 students, or 1.3 percent. It would seem that the enrollment counts we use are likely to be accurate.
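The same check can be run in a few lines, using only the figures reported in the paragraph above.

```python
# Re-running the arithmetic in the paragraph above, using the figures
# reported there.

public_9th_graders_1999 = 3_635_420   # CCD-based estimate of entering public 9th graders
fourteen_year_olds      = 3_892_340   # Census Bureau count of 14-year-olds
private_hs_students     = 835_328     # NCES private high school enrollment (2001)

private_9th_graders = private_hs_students / 4                # roughly one quarter of grades 9-12
public_14_year_olds = fourteen_year_olds - private_9th_graders

gap = public_14_year_olds - public_9th_graders_1999
print(round(gap))                                            # about 48,088 students
print(round(100 * gap / public_14_year_olds, 1))             # about 1.3 percent
```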
The NCES offers another way to ballpark the nation’s high school graduation rate: compare the number of high school diplomas awarded in a given year with the number of 17-year-olds in the nation. In 2003-04, public and private schools awarded about 3,062,000 diplomas, and there were about 4,087,000 17-year-olds, according to the CPS. That yields a back-of-the-envelope graduation rate of 74.9 percent. Clearly, this number is imperfect, but it is hard to see how it can be reconciled with the graduation rate Mr. Mishel produces using the CPS. Given the number of 17-year-olds, his estimate of a graduation rate in the neighborhood of 90 percent implies that there should be about 3,678,300 diplomas. So Mr. Mishel’s CPS-based math conjures more than half a million diplomas out of nowhere.
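Here is that back-of-the-envelope check spelled out, using the figures in the paragraph above.

```python
# The back-of-the-envelope check from the paragraph above, using the
# approximate figures reported there.

diplomas_2003_04    = 3_062_000   # public and private diplomas awarded
seventeen_year_olds = 4_087_000   # CPS count of 17-year-olds

print(round(100 * diplomas_2003_04 / seventeen_year_olds, 1))    # about 74.9 percent

# Diplomas implied by a graduation rate in the neighborhood of 90 percent:
implied_diplomas = 0.90 * seventeen_year_olds
print(round(implied_diplomas))                                   # about 3,678,300
print(round(implied_diplomas - diplomas_2003_04))                # a gap of more than 600,000
```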
Mr. Mishel’s estimates and arguments do not even pass a very simple reality check. According to his Commentary, he finds that 95 percent of white students graduate from high school. We would venture to guess that few observers of the American education system would find such a figure remotely plausible.
To be sure, if the CCD enrollment and diploma counts are inaccurate, we would be quite interested to know it. Mr. Mishel asserts that those counts are unreliable, but offers nothing more than discredited figures from survey samples with known defects that happen to produce results different from our own. In the end, he would have us jettison the independent research base that we and others have built in recent years, and go back to a time when published graduation rates were so unreliable that they provided little information about the quality of our public schools.
We are not willing to turn back and thus lose the progress that has been made. America’s schools, families, and students deserve better.