When I first started looking at education research back in 1994, I encountered all sorts of claims from educators that they really understood how children learned. It didn’t take long to develop a strong, though subjectively based, opinion that there was remarkably little scientifically based research to support the many education fad ideas being paraded out in the early days of KERA in Kentucky.
My concerns solidified following the release of the report of the National Reading Panel (NRP) in 2000. Data collected during that study showed that remarkably small percentages of research papers on education met even minimal standards for rigor. In crucial reading-related research areas like phonemic awareness, phonics instruction, guided oral reading, vocabulary instruction and teacher preparation/comprehension strategies, fewer than five percent of the papers met minimum standards for rigor. The rest essentially could not be used to prove anything.
The NRP was very carefully organized. The federal Department of Education was not placed in charge. Instead, the NRP was convened by the National Institute of Child Health and Human Development (NICHD). NICHD has sponsored highly scientific research on reading for a number of years and was uniquely qualified to recruit not only top educators, but also leading members of the scientific, medical and psychological communities. NICHD also included parents.
The NRP established a scientifically rigorous set of standards for acceptance of education research papers, adopting requirements similar to those used in psychological and medical research. Then the NRP started to look at reports on reading – LOTS of reports – covering such areas as phonemic awareness, phonics instruction and text comprehension. The NRP first listed all the reports it could find and then retained, for use in assembling its findings, only those that met at least minimum requirements for scientific rigor. That process produced an expert-developed set of statistics on the total number of education reports on reading versus the proportion of those reports that met even minimal quality standards.
The numbers, as the table I assembled below shows, are gruesome. The numbers in Table 1 were assembled from one of a series of reports issued by the NRP some time ago that no longer seems to be online. A current version of the report containing much of the same information is found here.
As you can see, in most cases the proportion of quality reports on the various topics important to the teaching of reading is a single-digit percentage. That’s all.
To review my earlier comments, in crucial reading areas like phonemic awareness, phonics instruction, guided oral reading, vocabulary instruction and teacher preparation/comprehension strategies, fewer than five percent of the papers met minimum standards for rigor. In one case, the crucial area of teacher preparation/comprehension strategies, only 0.63 percent of over 600 papers in the area were even minimally rigorous.
The shortage of quality research, and the smoke screen created by the many reports lacking rigor, are serious problems that obviously impact the way teacher candidates are taught to teach reading. All of a sudden, the nation’s relatively stagnant reading results become easier to understand.
The results from the NRP establish a solid, data-based reason for real concern anytime we hear the term “research shows” used to push adoption of some education fad, at least in the area of reading.
But the education research problem isn’t restricted to reading. In more general terms, D. W. Miller wrote in the Chronicle of Higher Education in 1999 that:
“All disciplines produce lax or ineffective research, but some academics say that education scholarship is especially lacking in rigor and practical focus on achievement.”
“Scholars eschew research that show what works in most schools in favor of studies that observe student behavior and teaching techniques in a classroom or two. They employ weak research methods, write turgid prose, and issue contradictory findings. Educators and policy makers are not trained to separate good research from bad, or they resist findings that challenge cherished beliefs about learning. As a result, education reform is often shaped by political whim and pedagogic fashion.”
I guess that covers it nicely. Most likely, when we hear “research shows” in an education discussion, the cited research doesn’t really show anything, because it most likely wasn’t crafted in a manner that would lead to scientifically valid conclusions. And, because, as Miller states, “Educators and policy makers are not trained to separate good research from bad,” it is likely that educators claiming “research shows” don’t even know what is really going on.