Confusing bill would destroy common state assessments in Kentucky

A very strange bill is working its way through the legislature. It has already cleared the Kentucky House and is now before the Kentucky Senate’s Education Committee.

House Bill 92 could allow a waiver or modification of the statewide assessment system for schools participating in a district of innovation plan. The bill would also allow a district of innovation to use student assessments other than those required by the state board.

Now, here is the major puzzle. When Common Core was being debated, we were told our state needed common standards with other states so we could compare our assessment scores to scores from those other states. It didn’t work out that way, of course. It turns out that results from different tests are just plain different. Even though the same standards underlie almost all state assessments today, experts tell us Kentucky still cannot compare our KPREP scores to scores from any other state because KPREP is unique to Kentucky. No other state uses it.

Bottom line: establishing tests that truly are comparable is a really difficult challenge. Most often, even tests built around the same standards just are not the same.

So, here is my concern. If a District of Innovation uses some other test besides the state standard, will there be any valid comparison to what the rest of Kentucky is doing?

If Districts of Innovation go their own way on testing, what will most likely happen is that we will lose all ability to fairly evaluate what is going on in those districts. Those districts could be failing their students miserably, and we would have no way to show it.

If any Kentucky district wants to use additional assessments, they can do that right now. Perhaps if a District of Innovation used both the new test and the state’s standard test for several years, comparability could be established. But, allowing a district to abruptly resign from the state’s assessment program isn’t technically defensible and in fact is just flat unacceptable. Furthermore, those who could suffer the most could be the children in those districts experimenting under the Districts of Innovation plan. Essentially, the state would lose all objective control over what was happening.

I have one more concern, which raises an issue Democrats are already using to challenge the current Senate Bill 1. The federal Every Student Succeeds Act (ESSA) was only recently enacted. The ESSA certainly does discuss state testing programs. However, the ESSA’s enabling regulations won’t be out for another year. Could adoption of HB-92 at this time be premature and especially unwise? We might wind up with a number of districts using a test for a year or two, then having to drop it, losing a lot of important trend data in the process.

To be very clear, I have reservations about Kentucky’s KPREP state assessments, and looking for alternatives is entirely reasonable. But, I am completely opposed to suddenly trashing trend lines, too. A far better way to transition to a new test, if a District of Innovation wants that, is to run a sample program with both tests for several years to see how the two tests compare. Don’t abandon the existing state test until a good trend line is available. That is the scientific way to do this, but – more importantly – it’s the right way to do this to protect students.


  1. Kathlene Zanardelli says:

    I wholeheartedly agree with the author’s point and recommendation to amend the requirement for Districts of Innovation (what does this term mean? I hope that it doesn’t mean districts with a large percentage of students with failing scores…)

    There absolutely must be comparable measures for student/district evaluation.

    • Richard Innes says:

      Thank you for your support, Kathlene.

      Districts of Innovation are not in general populated with an exceptional number of students with low scores, but rather are selected through an application process to be allowed to get waivers from state laws and regulations to try innovative education approaches. The program is too new to really draw conclusions about performance, hence our concern about losing comparison information about performance while the efficacy of the innovations being attempted is still unknown.