What was Project Follow Through? with Linda Carnine, Susie Andrist, and Jerry Silbert


Welcome back, Zig fans. My name is Dr. Zach Groshell. I am a teacher, a parent, and the host of this show, the Direct Instruction podcast.

The DI podcast is brought to you by NIFDI, which stands for the National Institute for Direct Instruction, and today we are going to be talking about Project Follow Through. I will be interviewing a panel consisting of Linda Carnine, Susie Andrist, and Jerry Silbert, three incredible people who were there, on the ground, during Project Follow Through.

But what was Project Follow Through?

To answer that question, I’ve asked Jean Stockard, an empirical social scientist, if I could read you an excerpt of her writing on this topic. Jean will be coming on the Direct Instruction Podcast next episode to talk about the effectiveness of Direct Instruction, based on her synthesis of over five decades of research. It is a fascinating interview that should not be missed.

So, let’s begin, shall we?

Project Follow Through was probably the largest study of educational interventions ever conducted, in the United States or elsewhere. While it is now largely forgotten, at the time it embodied many of the hopes and ideals of those who wanted a more just and equitable society and believed that education had an important role to play in those endeavors. Follow Through emerged from President Lyndon Johnson’s “War on Poverty,” announced in his 1964 State of the Union address to Congress.

Project Follow Through was originally conceived as a service project that would extend the types of support provided in Head Start to students in the primary grades. When it became clear that the cost of such an endeavor would be very large, the purpose was changed to determining the most effective educational interventions for students from low-income households. The Office of Education developed a research design, called “planned variation.” In contrast to a carefully controlled laboratory setting, this design would involve the implementation of educational innovations in real-life settings, but in the very best way possible. Sponsors of these innovations were required to “provide the community with a well-defined, theoretically consistent and coherent approach that could be adapted to local conditions,” and implement a “total program, rather than a small fragment, with a resulting possibility for a major impact on the child’s life.” Participating districts received supplemental funding of $750 for each Follow Through student to support additional costs for aides, materials, and staff travel. In addition, all children were provided health and dental care as well as nutritious food through meal programs. In total, Follow Through served over 10,000 students from low-income households in 180 communities at a cost, at that time, of $500 million, a research expenditure that will likely never again be matched.

Eighteen educational programs were initially involved in Follow Through. The programs represented the most popular educational approaches at the time, but varied in theoretical orientations and basic assumptions about how children learn. All, except Direct Instruction, had been developed by academics strongly influenced by educational theorists such as Jean Piaget or John Dewey. Importantly, each of the models, or their derivatives, is still prominent within education today in the various developmental, constructivist, inquiry-based, and similar approaches.

Accounts of Follow Through usually divide the programs into three general groups based on their underlying assumptions. Some, termed “affective skills models,” assumed that socioemotional development was most important. Schools and classrooms that promoted children’s self-esteem and positive peer interactions and built on children’s own interests were thought to be most successful.

Other models were termed “cognitive and conceptual skills models.” These approaches were based on cognitive developmental theory and the work of Piaget, assuming that children from low-income households were behind their peers because they lacked sufficient cognitive experiences. These models incorporated approaches such as self-directed learning, a language-experience approach similar to whole language, and an emphasis on students’ learning styles.

The models in the third group were termed “basic skills models.” These programs assumed that behaviors are learned and that children from low-income households lagged behind because they had not been adequately taught. As would be expected, the evaluators placed Direct Instruction in this group. In addition to having very different views of key influences on student learning, the models differed in the extent to which they were structured, or teacher-led. The DI and Behavior Analysis models were the most structured, while the Bank Street and Open Education models were least structured.

Before starting to participate in Follow Through, all students were administered a nationally normed instrument that examined basic skills in language, reading, and math. Other measures were used at the end of the school year. Affective skills were measured with the “Intellectual Achievement Responsibility Scale” (IARS), which tapped students’ locus of control, thought to be a key element in students’ self-concept and self-efficacy. Cognitive and conceptual skills were measured with subscales of the nationally normed Metropolitan Achievement Test (MAT), which focused on conceptual elements of mathematics and reading, and “Raven’s Coloured Progressive Matrices,” a cognitive test thought to measure analytic ability and cognitive reasoning. Basic skills were measured with subscales of the MAT that focused on vocabulary, math computations, and spelling.

Recognizing that it can take time and care to fully implement new programs, the Office of Education specified that no data would be published until the programs had been in place for eight years. They believed that this would allow substantial time for schools, teachers, and students to demonstrate the results of their best efforts. This extensive time frame, coupled with the wide range of assessment data, let the researchers compare the impact of the various programs in several ways: Did results differ across measures? Did results vary with the amount of exposure to a program? Did cohorts who entered the program in later years, when their programs were more established, have better results than those who entered at the start? Did results vary with different methods of analysis, for example, comparing to the control schools or to national norms? And so forth.

Abt Associates was responsible for analyzing the data, and its official, formal evaluation was released in 1977. In almost all respects, the analysis appears to have been very carefully conducted. The results were clear-cut and strong. Students from the DI sites significantly outperformed students in the comparison schools in all three of the areas that were measured: basic skills, cognitive skills, and affective skills. No other program had positive results in all three areas, nor did any other program have as many positive results as DI. Perhaps most striking was the lack of association between the stated aims of the programs and the outcomes. Programs designed to promote cognitive development had no significant positive results, and two of these programs had large numbers of negative results. Similarly, none of the programs designed to promote affective skills had any significant positive results on the affective measures, though they did have substantial numbers of significant negative results. The results were the same across all of the measures that were used and with different types of comparisons and analysis. This pattern of strong results in favor of DI held when results were examined separately by race-ethnicity, geographical region, years of exposure to the program, and initial test scores.

Even though the first official results of Follow Through were not published until 1977, preliminary results were available to sponsors by 1974. These reports made it clear that DI and, to a lesser extent, the Kansas Behavior Analysis model were the only approaches that were successful. As one might expect, these findings and the prospect of them becoming widely known were deeply disturbing to the sponsors of other programs. The other programs reflected deep-seated beliefs within the educational establishment, such as the importance of developmental and cognitive factors in promoting student achievement. Any indication that they were not effective could represent a profound challenge to their legitimacy. As one observer put it, the preliminary results from Follow Through were a “horrifying surprise” to these sponsors.

At the same time, most people affiliated with these sponsors were established figures in education with strong ties to foundations and federal funding agencies. They used their power and connections to counter the Abt report and eventually prevent the findings from having any influence on educational policy. With funding from the Ford Foundation, a panel of four education professors published a critique of Follow Through’s evaluation. They suggested that the outcome measures were unfairly selected and inappropriate, even though all sponsors had approved their use. They also claimed that the primary statistical technique, analysis of covariance, was “controversial,” a notion that would seem odd to most statisticians today, as it did at the time. Most importantly, they challenged the project’s central, original, stated purpose of finding “which model works best,” claiming that it was inappropriate. Instead, they argued, the evaluation should have addressed questions such as “what makes the models work” or “how can one make the models work better.”

The final, official statement on the results of Follow Through simply reported on the programs as an aggregate, with no details about the results from individual sponsors. In other words, results from all of the programs were grouped together. Because only one of the nine programs, DI, had consistently positive results, its success was disguised within the combined findings. Thus, the official statement from the federal government was that Follow Through had failed, neglecting to mention that one program had succeeded.

In subsequent years, the programs that were found to be ineffective in Follow Through have continued to receive substantial federal funding, and there has been no official acknowledgement of differences in their performance. The federal government continues to spend extraordinary amounts of money revisiting the original question posed by Follow Through, trying to determine which educational interventions are most effective, apparently with no recognition of the results obtained in that project.

Now, more than 50 years after the start of Project Follow Through, I’m excited to bring to you an interview with three of the pioneers of Direct Instruction, Linda Carnine, Susie Andrist, and Jerry Silbert. Let’s go over to them now.



