This study was a random-assignment effectiveness trial of the Program for Infant/Toddler Care (PITC), implemented during 2007–2010 in six Southern California counties and four Arizona counties. The study sample comprised 251 child care programs (92 child care centers and 159 licensed family child care homes) serving 936 children. Child care programs were the unit of random assignment. Data were collected at baseline (before random assignment) and in two follow-up waves conducted 14 and 23 months after random assignment.
Observational measures of global program quality and staff-child interactions, along with subscales assessing critical PITC policies, were collected at baseline and at both follow-ups. There were no overall impacts on any indicator of quality at either follow-up.
Children’s cognitive and language skills, as well as both positive and negative social behavior, were measured at both follow-ups. Overall, there were no significant impacts of PITC on any of the child measures. Subgroup analyses indicated that, among children who were 18 months or older at baseline, those in PITC settings had lower positive behavior scores than those in control settings, and there was a borderline tendency (p < .055) for them to perform less well on cognitive and language measures. A comparison of children in child care centers vs. child care homes indicated a borderline tendency (p < .051) for those in PITC centers to have lower levels of positive social behavior than control children in centers.
As an intent-to-treat study, this evaluation measured effects on all children who enrolled in the study and were in randomly assigned programs, including those who left their child care settings well before PITC was fully implemented. Although this design maintained internal validity, it reduced the treatment-control contrast: about 25% of treatment children received minimal or no treatment, either because their programs dropped out or because the children left the care setting.
The study results are unexpected and raise a number of questions. PITC incorporates features that are likely to have positive effects: a focus on relationships, on-site consultation, opportunities for assessment and feedback, and application to practice. The lack of effects on program quality underscores the difficulty of sustaining participation in an intensive, long-term intervention across a large number of community child care settings in geographically dispersed locations. In many cases, the training was never begun or was not completed, and, in centers, there was an unknown amount of staff turnover.
In future research, testing settings in which PITC features are fully implemented would provide a better test of the program's efficacy, and separate investigations of effective and ineffective training methods would be useful. If a well-implemented program were shown to have positive effects on child development, subsequent tests of effectiveness and implementation in real-world settings would permit clearer interpretation.