On problems with education research, state education rankings and teacher preparation
I have written several blogs over the past few days about an annual rite of passage in the education world. On January 12, 2012, Education Week released its annual report on education across the United States, known as Quality Counts (subscription).
Kentucky’s education boosters jumped on the report’s new state rankings, which showed Kentucky moving up 20 places in just one year (Really???).
It’s hard to justify so much celebration when Kentucky earned an unimpressive “C+” overall in Quality Counts, but the jump in the rankings does sound impressive (assuming you believe a state can change its education system that much, that quickly), until you ask some very basic questions:
What qualities really count in education, and does Quality Counts do a good job of identifying and grading them? For that matter, do many people involved with education really know what makes up a quality education system?
In my first two blogs (here and here), I talked about Quality Counts’ use of a less accurate graduation rate formula, which made Kentucky look better than it does under a more credible graduation rate calculation.
In the most recent blog, I discussed some disturbing evidence that Quality Counts’ very high ranking for “The Teaching Profession” in Kentucky is sharply at odds with brand new rankings from the National Council on Teacher Quality.
Now, let’s discuss all the ways Quality Counts gets in trouble with its extensive but simplistic use of National Assessment of Educational Progress (NAEP) scores.
Very briefly, Quality Counts does just about everything wrong in its state-to-state NAEP comparisons.
• Quality Counts totally ignores Kentucky’s nation-leading exclusion of students with learning disabilities from the NAEP reading assessments. The report never even mentions it.
• Quality Counts only examines potentially very misleading overall “all student” NAEP scores and never provides a clue that things might, and do, look very different when those scores are disaggregated by race.
• Quality Counts ignores the fact that all NAEP scores come from a statistically sampled test and are only estimates with considerable sampling error. Thus, it is possible for one state to somewhat outscore another when in reality the second state performs the same as, or even somewhat better than, the first (see the sketch after this list). Instead, Quality Counts simplistically ranks scores listed to the tenth of a point as though such small differences were meaningful.
All of these NAEP issues are no surprise to our regular readers. The surprise is that a normally high-quality news source would be enmeshed in them.