Panel Paper: Two to Tango? Combining Diagnostic Feedback and Capacity-Building for Schools in Argentina

Saturday, November 5, 2016 : 10:55 AM
Columbia 8 (Washington Hilton)


Alejandro J. Ganimian1, Rafael de Hoyos2 and Peter A. Holland2, (1)Abdul Latif Jameel Poverty Action Lab (J-PAL) South Asia, (2)The World Bank


Developing countries are increasingly administering student achievement tests and using their results to inform national, sub-national, and school policies. Yet, existing research on the impact of such policies remains limited. First, no study has evaluated the impact of simply administering student assessments. Typically, evaluations compare schools that have both administered student assessments and informed parents and/or teachers of their results to schools that have only administered student assessments (Andrabi, Das, & Khwaja, 2015; Camargo et al., 2011; de Hoyos, García-Moreno, & Patrinos, 2015; Duflo et al., 2015; Mizala & Urquiola, 2013; Muralidharan & Sundararaman, 2010; Piper & Korda, 2011).

Second, all existing evaluations have been conducted in lower-middle-income countries, where a large share of teachers is frequently absent (Chaudhury et al., 2006; Muralidharan et al., 2015). Such interventions may hold greater promise in middle-income countries, where teachers typically show up at school but struggle to use classroom time effectively (Bruns & Luque, 2014).

Third, no evaluation assesses the differential impact of combining diagnostic feedback with capacity-building for schools. Studies in high-income countries suggest that student assessment results can induce principals to update their views of the effectiveness of their teachers and encourage ineffective teachers to transfer schools or exit the school system (Dee & Wyckoff, 2013; Rockoff et al., 2012; Taylor & Tyler, 2012). However, this mechanism for school improvement has remained largely unexplored in developing countries.

We randomly assigned 105 public primary schools in the Province of La Rioja, Argentina to one of three groups: (a) a first treatment group, in which we administered standardized tests in math and Spanish at baseline and two follow-ups and made those results available to the schools at the beginning of each year (the “diagnostic feedback” group or T1); (b) a second treatment group, in which we did the same and also provided schools with support to design and implement an improvement plan (the “capacity-building” group or T2); and (c) a control group, in which we administered standardized tests only at the second follow-up.
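The three-arm assignment can be sketched as a minimal randomization routine. The school identifiers, the seed, and the procedure itself are hypothetical: the abstract does not describe how the randomization was implemented (e.g., whether it was stratified).

```python
import random

def assign_arms(school_ids, arms=("T1", "T2", "control"), seed=2013):
    """Randomly assign schools to three experimental arms of near-equal size.

    This is an illustrative sketch, not the study's actual procedure.
    """
    rng = random.Random(seed)          # fixed seed makes the draw reproducible
    shuffled = school_ids[:]           # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    k = len(arms)
    # Deal schools round-robin into arms so group sizes differ by at most one.
    return {sid: arms[i % k] for i, sid in enumerate(shuffled)}

# 105 public primary schools, as in the study; IDs here are placeholders.
assignment = assign_arms(list(range(1, 106)))
```

With 105 schools and three arms, this yields 35 schools per group; an actual field randomization would typically also stratify on baseline school characteristics.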

This design allows us to evaluate the causal impact of: (a) conducting assessments and providing timely, user-friendly information on their results; and (b) conducting the assessments, providing the information, and building schools' capacity to act on it. Moreover, by comparing (a) and (b), we can estimate the added value of capacity-building over and above providing the information alone.
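One standard way to estimate these impacts, not spelled out in the abstract, is an intent-to-treat regression with an indicator for each treatment arm; the notation here is illustrative:

```latex
y_{is} = \alpha + \beta_1 \, T1_s + \beta_2 \, T2_s + \mathbf{x}_{is}'\gamma + \varepsilon_{is}
```

where $y_{is}$ is the test score of student $i$ in school $s$, $T1_s$ and $T2_s$ indicate assignment to the diagnostic-feedback and capacity-building arms, and $\mathbf{x}_{is}$ are optional baseline controls. Then $\beta_1$ estimates impact (a), $\beta_2$ estimates impact (b), and $\beta_2 - \beta_1$ is the added value of capacity-building; standard errors would be clustered at the school level, the unit of randomization.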

We conducted three rounds of assessments (in 2013, 2014, and 2015). In 2013, we assessed students in grades 3 and 5; in 2014, in grades 3, 4, and 5; and in 2015, in grades 3 and 5. Thus, we have a panel of grade 3 classrooms (since grade 3 students were assessed in every round) and a panel of students (since the cohort that started grade 3 in 2013 was assessed in 2013, 2014, and 2015). This allows us to estimate the cumulative (two-year) effects of the intervention at the classroom level and the differential effect of capacity-building at the student level.
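The two panels described above can be made explicit with a short sketch; the grade progression of the student cohort (one grade per calendar year) is inferred from the cohort description, not stated procedurally in the abstract.

```python
# Grades assessed in each round, as described in the design.
rounds = {2013: [3, 5], 2014: [3, 4, 5], 2015: [3, 5]}

# Classroom panel: years in which grade 3 classrooms were assessed.
grade3_panel = [year for year, grades in rounds.items() if 3 in grades]

# Student panel: the cohort entering grade 3 in 2013 advances one grade per
# year; it is observed whenever its current grade was assessed that year.
cohort_years = [year for year, grades in rounds.items()
                if (3 + year - 2013) in grades]
```

Both lists cover all three rounds, which is what makes the classroom-level (cumulative) and student-level (differential) comparisons possible.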