The U.S. Department of Education has released some preliminary results on the effectiveness of the School Improvement Grant (SIG) program, a $545 million annual program into which the Obama administration poured an additional $3 billion in 2009 stimulus funds to “turn around” failing schools. Our Ed Money Watch colleague Anne Hyslop has an excellent rundown of the SIG data, but we wanted to highlight a few key pieces on how early education seems to have fared.
The big news for the early education world is that “A larger proportion of elementary schools posted gains in the first year of the SIG program, compared to middle and high schools, and they were less likely to see declines.” But when discussing the implications of this finding, a healthy dose of skepticism is required.
Here’s the thing: One year’s worth of data on turning around a failing school is not very valuable, and it’s simply too soon to judge the effectiveness of the program. These schools were failing, after all, so everyone should temper their expectations about what can happen in a year. With that massive caveat, here are some other issues with the data that Hyslop points out:
We have no idea whether the data issued by the Department are statistically significant. We also have no data on other harder-to-measure aspects of a school’s climate or on potential changes in the community in which its students live.
As Hyslop points out, “Without at least addressing these issues, it is impossible to know whether changes in student performance were even attributable to changes in school leadership or culture (i.e. the SIG program) rather than conditions in the economy or students’ home lives...” Standardized test scores may not be the best metric by which to measure the SIG program’s success. After all, improved test scores aren’t the only goal of the program. Next year, we will see more data – on teacher and leader attendance, advanced course enrollment, and other measures – but areas like school leadership, school culture, and parent involvement remain unreported, Hyslop explains.
A few other things to keep in mind: SIG schools were permitted to use funds to create pre-K or full-day kindergarten programs, although they weren’t required to. As we’ve pointed out, one problem with rapid turnaround efforts is that they don’t allow enough time to demonstrate the effectiveness of early childhood programs. It will be at least three or four years before we can measure any gains on third grade standardized tests that may come from a newly implemented pre-K or full-day kindergarten program.
So given the limitations of the new data, what about the finding that turnaround efforts in elementary schools were more effective than those in middle and high schools? We have a couple of questions for the Department:
Will the Department judge student performance in first and second grade? Since state-wide standardized testing doesn’t usually begin until third grade, that data is often left out. And yet we need to pay attention to how children are faring in the early grades, both to inform changes in instruction at those grade levels and to anticipate what teachers will need to do once those children reach the later grades.
Did access to pre-K, full-day kindergarten, or other early childhood programs affect children’s outcomes in elementary school? What percentage of children in these failing elementary schools attended pre-K or full-day kindergarten?
Especially when it comes to early learning, we need much more data from these “turnaround schools” to arrive at definitive findings — data we hope are being collected in the “gold-standard” longitudinal research being conducted by the Institute of Education Sciences, which will eventually be released. And we will need at least a few more years before we can make lasting judgments about the School Improvement Grants’ effectiveness. Any major conclusions being drawn about the program now are almost certainly premature.