Higher education standards won’t guarantee better schools

One of our correspondents raised an interesting issue back before Christmas – namely, there doesn’t seem to be much of a relationship between the difficulty of a state’s test standards and actual academic performance in that state’s schools.

I decided to take a look at that in a new way, comparing improvement on the National Assessment of Educational Progress (NAEP) mathematics assessments to a new analysis of the relative difficulty of various state standards.

What I found was very interesting, as this graph shows:


All of the correlation figures shown are low, meaning that across the country, at least between 2003 and 2009, there hasn’t been much of a correspondence between having higher state academic standards and improvement in mathematics.

Interestingly, the correspondence is even weaker for blacks than for whites. In fact, the data indicate that, across the country, eighth grade blacks receive essentially no benefit from living in states with higher math standards on their own state tests.

While there are limitations to this study, one message seems fairly clear: Those who pushed an early idea in KERA that testing would largely drive reform all by itself were wrong. It didn’t work out that way in Kentucky, and this new little study says it isn’t working out that way in general across the country, either. While testing is important, it isn’t a final answer. Testing standards can be overshadowed by such things as inefficient use of resources, restrictive union practices, inadequate teacher preparation, inadequate teacher professional development and a number of other issues we have discussed extensively in this blog. Schools clearly need other stimuli, perhaps including the spur of competition that school choice can create, to make real improvement that reaches ALL students.

For more details and study limitations, click the “Read More” link.

State standards analysis

Recently, the National Center for Education Statistics (NCES) sponsored research that maps the difficulty of state test proficiency standards to an equivalent “cut score” drawn from the National Assessment of Educational Progress’ scoring scale. The latest report on this research is titled “Mapping State Proficiency Standards Onto NAEP Scales: 2005-2007.”

Table 2 in this report shows, for example, that as of 2007, the Kentucky Core Content Test grade 8 equivalent score for proficiency is 279 on the NAEP math scale, while Massachusetts – a state widely acknowledged to have high standards and performance – has an equivalent NAEP proficiency score of 302. The same table shows Kentucky’s grade 4 proficiency level score on the NAEP would be 229, while Massachusetts again has a much higher standard with an equivalent proficiency score on the NAEP scale of 254.

Compare to NAEP improvement over time

For my comparison data of state improvement in math, I used the change in NAEP math scale scores between 2003 and 2009. I downloaded those scores by race from the NAEP Data Explorer tool.

I picked 2003 for the early set of test scores as this was shortly after the passage of No Child Left Behind and was the first year that all states had to participate on NAEP math assessments.

I did separate analyses by race for the two dominant racial groups in Kentucky because comparison of overall NAEP scores for all students has become problematic due to strong demographic shifts that occurred in the past decade in some, but not all, states. Those demographic shifts tend to hide real performance changes in states such as California, where whites now form a small minority in public school classrooms. In comparison, some states like Kentucky still have very high white populations in their public schools.

After breaking the data down by race, I then ran a correlation calculation in Excel comparing the relative difficulty of each state’s NAEP proficiency cut score to the state’s change in NAEP math scores over time. A correlation calculation returns a number between minus one and plus one. If there is a strong positive correspondence between two sets of data, the number tends to be close to plus one; if one set tends to fall as the other rises, the number tends to be close to minus one. If there is no correspondence at all, the number tends to be close to zero.
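For readers who want to see what Excel’s CORREL function actually computes, here is a small sketch of the same Pearson correlation in plain Python. The cut-score and score-gain numbers below are made-up illustrative values, not the actual study data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, equivalent to Excel's CORREL."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: five states' NAEP-equivalent cut scores
# paired with their 2003-2009 NAEP math score gains.
cut_scores = [279, 302, 265, 288, 294]
score_gains = [4, 6, 3, 2, 5]
print(round(pearson(cut_scores, score_gains), 3))
```

A value near zero from a calculation like this is what the post describes: higher cut scores not lining up with larger score gains.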

Of course, there are individual state exceptions

For example, both whites and blacks in Massachusetts rank at the top for their 2009 NAEP grade 8 scale scores.

And, Kentucky’s blacks actually do somewhat better compared to blacks in other states than do Bluegrass State whites. The state’s eighth grade whites tie for a rather dismal rank of 41st place out of 47 states for 2009 NAEP math scores among those states that had usable data for my study, while the state’s blacks ranked 28th out of the 38 states with usable data for that racial group.

Also, I didn’t account for the considerable sampling errors in both the NAEP scores and the estimates of the state test NAEP equivalent cut scores. Those plus-or-minus sampling errors tend to make real differences even harder to detect.

Another unaddressed issue is the variation in exclusion of students on NAEP from state to state, which corrupts the scores in ways that have never been fully determined.

A spreadsheet with more details is online here.
