Because of the complexity of the SFA program and the difficulty of training evaluation staff to observe it, the evaluation relies on an SFA-developed rating system as its core measure of fidelity. As a tool for its coaches, SFA developed a detailed school fidelity rating, the Snapshot (the current version has 99 measures), which coaches complete to summarize the status of the many aspects of the SFA reading program. Coaches use the Snapshot to rate schools in three domains: Schoolwide Structures, Instructional Processes, and Student Engagement. Schoolwide Structures covers systems to assess student progress in academics and behavior and to organize staff to monitor instructional and non-instructional goals. Instructional Processes items concern how teachers manage instruction and student behavior in the reading classroom. Student Engagement items address student behaviors and cooperative learning. In each content area, the Snapshot distinguishes items that are high priority for first-year implementation from those that may be implemented later. It measures how many program structures were implemented at the school level and what proportion of the school's reading classrooms demonstrated clear use of the instructional and student engagement practices, but it does not measure dosage or quantity at the student level.
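As a rough illustration of how item-level ratings of this kind could be organized and rolled up by domain, the minimal Python sketch below represents each measure with its domain, priority tier, and coach rating; the field names, item values, and 0-1 rating scale are hypothetical and do not reflect SFA's actual instrument.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class SnapshotItem:
    """One of the ~99 Snapshot measures (fields and scaling are hypothetical)."""
    domain: str                # "Schoolwide Structures", "Instructional Processes", or "Student Engagement"
    first_year_priority: bool  # high priority for first-year implementation vs. implemented later
    rating: float              # coach rating on a 0-1 scale (e.g., proportion of classrooms showing the practice)

def domain_scores(items):
    """Average the coach ratings within each of the three Snapshot domains."""
    totals, counts = defaultdict(float), defaultdict(int)
    for item in items:
        totals[item.domain] += item.rating
        counts[item.domain] += 1
    return {domain: totals[domain] / counts[domain] for domain in totals}

# Hypothetical ratings for a single school
items = [
    SnapshotItem("Schoolwide Structures", True, 1.0),
    SnapshotItem("Instructional Processes", True, 0.75),
    SnapshotItem("Student Engagement", False, 0.60),
]
print(domain_scores(items))
```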
The evaluation team worked with SFA to identify which measures were relevant for first-year operations and which became important over time, and to set weights for the various measures. The team also worked with SFA to understand how coaches use the Snapshot and how candid their ratings were (the Snapshot is not shared with schools and serves as a device for organizing further technical assistance). Distilling this complex rating system into a summary measure of implementation posed a substantial data-reduction task. In addition, the team supplemented the Snapshot with on-site field research to understand how principals, teachers, and on-site SFA facilitators perceived program implementation, and a different strategy had to be used to assess services in the control schools. The experience highlights the tradeoffs of using an operational rating system for evaluation: lower costs for frequent observation of programs and detailed knowledge of desired services, but considerable analytic effort to turn the detailed rating into a summary tool for fidelity analysis.
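To make the data-reduction step concrete, the sketch below shows one way evaluator-set weights could be applied to collapse item-level ratings into a single school-level fidelity score, optionally restricted to first-year items; the function, item identifiers, and weights are hypothetical and are not the evaluation team's actual procedure.

```python
def fidelity_summary(ratings, weights, first_year_items=None):
    """
    Collapse item-level Snapshot ratings (item_id -> 0-1 rating) into one
    weighted school-level fidelity score. `weights` holds the evaluator-set
    weight per item; `first_year_items`, if given, restricts the summary to
    items deemed relevant for first-year operations.
    """
    item_ids = set(ratings) & set(weights)
    if first_year_items is not None:
        item_ids &= set(first_year_items)
    total_weight = sum(weights[i] for i in item_ids)
    return sum(weights[i] * ratings[i] for i in item_ids) / total_weight

# Hypothetical example: three items, weighted toward schoolwide structures
ratings = {"grouping_by_level": 1.0, "cooperative_learning": 0.6, "pacing": 0.75}
weights = {"grouping_by_level": 2.0, "cooperative_learning": 1.0, "pacing": 1.0}
print(round(fidelity_summary(ratings, weights), 2))  # 0.84
```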