It’s been four years since Senate Bill 130 from the 2006 Regular Legislative Session brought us quality tests from ACT, Incorporated – EXPLORE for eighth grade students, PLAN for tenth graders, and the ACT college entrance test for all eleventh grade students.
One of the nice features of the new tests is that they offer “Benchmark Scores.” Reaching those scores indicates a student has relatively good odds of passing the first related college course.
Those ACT Benchmarks were not created out of thin air. They were developed from a canvass of colleges that use the ACT to determine which score best corresponded to a 50 percent chance of earning a “B” or better and a 75 percent chance of earning a “C” or better in the first related college course. The ACT Benchmarks certainly look like credible indicators, ones that students and parents should weigh carefully, bearing in mind that some colleges are more demanding, and some less, than the average performance requirements the Benchmarks represent.
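To make the idea concrete, here is a minimal sketch of one simple way such a cutoff could be estimated from course-outcome data. This is an illustration only, not ACT’s actual statistical method, and the sample records below are hypothetical: it finds the lowest ACT score at which at least half of students earned a “B” or better in the first related college course.

```python
from collections import defaultdict

def empirical_benchmark(records, target_rate=0.5):
    """Find the lowest ACT score whose observed success rate meets target_rate.

    records: list of (act_score, earned_b_or_better) pairs.
    Returns None if no score reaches the target rate.
    """
    hits = defaultdict(int)    # successes per score
    totals = defaultdict(int)  # students per score
    for score, success in records:
        totals[score] += 1
        hits[score] += int(success)
    for score in sorted(totals):
        if hits[score] / totals[score] >= target_rate:
            return score
    return None

# Hypothetical sample: course success rates rise with ACT score.
sample = [(22, False), (22, False), (23, False), (23, False),
          (24, True), (24, True), (24, False), (25, True)]
print(empirical_benchmark(sample))  # prints 24
```

In practice a benchmark study would smooth the score-by-score success rates with a regression model rather than read them off raw counts, but the underlying question is the same: at what score does the chance of success cross the target probability?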
Recently, an education advocacy group raised questions about the ACT Benchmarks. I thought that group’s analysis wasn’t well done, so I have been looking into what is actually going on.
I first determined the number of high school graduates who scored at or above the ACT Science Benchmark score of 24. Then, I compared that to the sum of the associate’s and bachelor’s degrees in Science, Technology, Engineering and Mathematics (STEM) awarded to the same group of students. For my comparison, I added the number of associate’s degrees awarded two years after the class graduated from high school to the number of bachelor’s degrees awarded four years after graduation.
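The cohort matching described above can be sketched in a few lines. All of the counts below are illustrative placeholders, not actual Kentucky data; the point is only to show how the two degree series line up against a single high school graduating class.

```python
def stem_degree_total(assoc_by_year, bach_by_year, hs_grad_year):
    """Sum associate's degrees awarded two years after high school
    graduation and bachelor's degrees awarded four years after."""
    return (assoc_by_year.get(hs_grad_year + 2, 0)
            + bach_by_year.get(hs_grad_year + 4, 0))

# Illustrative Kentucky-resident STEM degree counts, keyed by award year.
associates = {2008: 900, 2009: 950}
bachelors = {2010: 6_500, 2011: 6_700}

# Illustrative counts of graduates at or above the Science Benchmark (24).
benchmark_count = {2006: 8_200, 2007: 8_400}

for cohort in (2006, 2007):
    degrees = stem_degree_total(associates, bachelors, cohort)
    diff = benchmark_count[cohort] - degrees
    print(cohort, degrees, diff)
```

With real data, the `diff` column is the gap discussed below between Benchmark scorers and later STEM degree earners.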
Here is what I found.
As you can see, the agreement between ACT Science Benchmark performance and the later award of STEM degrees is quite good. The difference, about 800 students each year, amounts to only around 10 percent of the number of STEM degrees awarded.
As education statistics go, this is a very good correspondence, especially when you consider that the ACT Benchmark is only intended to indicate a student’s likelihood of passing the first freshman biology course, not how likely the student is to do all the other work required to get a STEM-area degree.
By the way, the STEM degree award totals above include only those graduates from public colleges who are from Kentucky. I eliminated the degree awards to foreign and out-of-state students. Even so, the degree award numbers in the graph are still a bit too high because the Kentucky Council on Postsecondary Education, which is where I got the degree data, does not have information on the number of STEM degrees the independent colleges in Kentucky award to out-of-state and foreign students.
I need to mention that my analysis is far from ideal. The best way to do this sort of study is with high accuracy longitudinal data that tracks each student through the educational system. Unfortunately, a system that could support such detailed studies is only now being created in Kentucky. Thus, the analysis above is probably about the best we’ll see for some years to come.
In any event, despite what you may have read recently elsewhere, it looks like the ACT Science Benchmark provides valuable information for Kentucky’s students. The information even has a surprisingly good correspondence to the later award of STEM degrees.
Certainly, the 2006 legislation that brought 100 percent testing of all our students with the ACT’s EPAS system, including EXPLORE for the eighth grade, PLAN for the tenth grade, and the ACT itself for all eleventh grade students, was a major advance for Kentucky.
Without question, the data from these tests look far more valuable than anything parents, students and teachers ever got from our old CATS assessment.
Sadly, not everyone agrees about the value of the ACT Benchmarks. Recently, the Prichard Committee for Academic Excellence’s blog took a swipe at the ACT’s Science Benchmark scores.
That Prichard post compared the Science Benchmark numbers to the total number of all degrees awarded in the state, even though a large number of those degrees require little, if any, science. There were no corrections for out-of-state students in the totals, either.
It seemed clear that if we are talking about the Science Benchmark, we should look at how it corresponds to the later award of STEM degrees, not all degrees.
While the available data aren’t superb, Prichard’s analysis was still inappropriate. My concerns ultimately led to the blog post you are reading now, and as the graph above shows, my reservations about the Prichard analysis were warranted.
For more on the data sources and how I did my calculations, check out this freedomkentucky.org Wiki item.