Panel Paper: Evidence Building Opportunities in the Chicago Public Schools

Thursday, November 8, 2018
Marriott Balcony B - Mezz Level (Marriott Wardman Park)


Amanda Moreno, Erikson Institute


The i3 Development grants that Erikson received certainly provided evidence-building opportunities that we would not otherwise have had. As the third-largest district in the country, facing enormous challenges including hyper-segregation, funding shortfalls, and union disputes, Chicago Public Schools (CPS) is notoriously difficult to conduct research in. The fact that these projects could offset the cost of programs and services the schools desperately needed anyway is what enabled us to get in the door and begin telling the story of the effectiveness of these new strategies.

For the mindfulness grant, the i3 evidence-building requirements were largely aligned with our interests and priorities, but there was much more we would have liked to do. This is particularly challenging in the early grades, where proctored surveys are not possible and testing is not consistent across schools, so administrative data are of limited use. Time was a significant barrier, and, particularly for a development grant, in hindsight we might have been wise to conduct longer or additional testing sessions with fewer students.

Continued efforts to build evidence are very important to us, and the i3 initiative unquestionably opened doors in this regard. For example, we developed an app within the original project, and although we could not test its independent effects, we believe we are much better positioned to acquire funding for that next step because we will soon be publishing data showing, at the very least, that the app is engaging and feasible and that teachers believe it supports their students. We also plan to conduct a study of “super implementers” as a way of helping the field visualize how teachers can both implement with high fidelity and “make it their own.”

For the early math project, the i3 evidence-building requirements spurred a deeper look at fidelity of implementation, which we found quite useful. Specifically, we noticed trends in attendance patterns during year 1 that inspired program changes in year 2, and we also found that participation rates predicted outcomes at the teacher level. There were, of course, pieces of evidence we would have liked to gather, such as growth in reading scores, particularly since there is evidence that math scores are predictive of later reading achievement. Showing whether or not increases in math scores predicted increases in reading scores would have been broadly useful.

We were able to build on the results of this i3 work to secure a National Science Foundation development grant that is bringing our techniques to Head Start centers across Chicago. We are now in the third year of this randomized controlled trial and have vastly improved our internal systems for capturing fidelity, so the evidence we gather will have a stronger qualitative dimension than in our previous i3 work.

In short, consistent with growing interest in treatment effect heterogeneity in education research, our future evidence-building efforts (for both Erikson development grants) will focus, in part, on what makes our strategies work and for whom.