Research Question: How have different studies that attempt to measure the effectiveness of TFA on student achievement evolved over time, and how have the results changed?
“Teach For America welcomes and seeks out rigorous independent evaluations as a means of measuring our impact and continuously improving our program.”
Six years ago, on October 5, 2006, this quote appeared on the Teach For America website, promoting continuous improvement and change to ensure the maximum effectiveness of a still-developing program. Today, hard evidence and research are entirely omitted from the website; the emphasis falls instead on personal accounts of corps members and emotion-evoking quotes and statements. This pattern is not apparent solely on the Teach For America website; it also emerges across the many studies conducted over time to measure the program's impact. The first scholarly evidence of Teach For America's effects began to appear in the early 1990s. Criticisms ranged from inadequate training of corps members prior to their placements to a lack of improvement in classroom reading scores. Improvements were noticeable, however, particularly in math, where corps members had a statistically significant positive impact on their students. Through the early 2000s, studies continued to assess the statistical impact of Teach For America, but the results began to appear far more negative than in the program's early years. Recently, a drastic shift has begun in the scholarly literature on Teach For America's effectiveness: statistical analysis is quickly dissipating, and articles strongly supporting the organization now rely on studies built on personal accounts of the program and theoretical analysis of its impacts. Studies of Teach For America's early years offer hard evidence of the benefits and drawbacks of a program with potential for change, whereas more recent studies draw on abstract personal accounts of success in order to promote a statistically declining program.
Based on a large sample of students from Houston, Texas, Darling-Hammond, Holtzman, Gatlin, and Heilig conducted a study comparing TFA corps members to certified teachers with similar levels of experience from 1995 to 2002. They found that from 1996 to 1999, significantly more TFA members were certified than non-TFA members; from the 1999-2000 school year onward, however, this relationship was completely reversed, and significantly more non-TFA teachers than TFA members held certification. This relationship is shown in the graph below:
This decline in certification among TFA corps members has had an overall negative effect on the program's results. In the earlier years of the study, when TFA members were more likely than non-TFA members to be certified, their teaching had a positive impact, particularly on the Texas Assessment of Academic Skills (TAAS) math test. In the early 2000s, however, as the number of certified TFA members declined, the impacts on scores were found to be non-significant or even negative. In response to these data, studies and articles have proposed reforms to address the flaws emerging in a seemingly promising program (Hopkins, 2008). Unfortunately, more recent studies center on personal accounts and abstract theories in an apparent attempt to disregard the emerging negative statistics.
Studies from 2011 and 2012 closely mirror the recent changes to the TFA website over the past six years. Both neglect statistical evaluation of the program and instead depend on personal experiences to fuel support and present its impacts in a positive light. As studies such as Darling-Hammond et al. demonstrate, the statistics over time have unfortunately not favored TFA. More recent studies, however, choose to ignore these findings and instead propose theoretical explanations for TFA's success that are never validated with statistics. Maier presents findings framed by credentialism theory: the claim that TFA succeeds because of the backgrounds of the members it selects, both the fields they have studied and the prestigious universities they have attended. Notably, this study proposes that because corps members hold expertise in fields other than teaching, they are better positioned for success in the classroom than certified teachers who lack the same credentials. The study also goes on to discuss the remarkable opportunities corps members encounter after participating in TFA. The attrition rate for TFA corps members, according to Darling-Hammond et al., is extremely high. Theories such as the one presented in Maier's study promote TFA as a stepping-stone to more prestigious jobs after the two-year teaching requirement, an incentive that earlier studies never presented as a primary reason to pursue this path.
Darling-Hammond, Linda, Deborah J. Holtzman, Su Jin Gatlin, and Julian Vasquez Heilig. “Does Teacher Preparation Matter? Evidence About Teacher Certification, Teach for America, and Teacher Effectiveness.” Education Policy Analysis Archives 13, no. 42 (2005).
Teach For America. “Home.” n.d. http://www.teachforamerica.org/.
Hopkins, Megan. “Training The Next Teachers For America: A Proposal for Reconceptualizing Teach for America.” Phi Delta Kappan 89, no. 10 (June 2008): 721–725.
Maier, Adam. “Doing Good and Doing Well: Credentialism and Teach for America.” Journal of Teacher Education 63, no. 1 (January 2012): 10–22.