School choice—a resounding success! Or is it?
Across the nation, the popular rhetoric used to describe school choice is glowing. Describing Connecticut’s choice system, newspaper headlines proclaim, “[Choice programs are] a major contributor to closing the achievement gap” 1 and “Students [in school choice programs] are improving each year!” 2. The much-discussed full-length documentary, Waiting for Superman, holds charter schools and parent choice up as the last hope for urban students to succeed 3. But in reality, many of these assertions rest on a faulty comparison. The current rhetoric in the public sphere about choice schools and student performance fails to account for selection bias.
Measuring the achievement impact of choice schools, compared to traditional public schools, is very difficult. The only true comparison would require a parallel universe: we would compare students who attended a choice school in one universe with the very same students simultaneously attending a public school in the other. If this technique were possible, many researchers would be out of a job.
This article points out the flaws in many evaluations of choice schools and highlights several ways to improve school choice analysis. Additionally, using a robust data set, I provide an original analysis that accounts for some of these issues, and I situate the findings in a broader context.
Selection bias—the problem that plagues all school choice studies
To investigate the effect that school choice has on student outcomes, researchers leverage statistical tools to try to make the most accurate comparison possible. The issue we are most concerned about when making this comparison is selection bias. Selection bias occurs when the population of students you are looking at is not random but self-selected. In the school choice debate, we worry about selection bias because the families who choose to apply to and attend a charter school may be even slightly different from the families who simply keep their kids in traditional public schools. The problem arises when we try to compare these two groups. It may be that the difference we observe in test scores is really due to dissimilarity in family characteristics rather than to the effectiveness of choice or traditional public schools. Herein lies the challenge: How do we make a true comparison of student outcomes between choice schools and traditional public schools?
Virtual twin method—one way to minimize the impact of selection bias
The CREDO team at Stanford University developed a method called the “virtual twin” to try to make better comparisons. The CREDO reports use measurable student characteristics and prior achievement to match students in charter schools with students who attend public school in the same school district. For example, CREDO compares two students with similar prior test scores, both from low-income families with high parental education, where one student now attends a charter school and the other a traditional public school. They do this with many pairs of students, or “twins,” to curb selection bias and make a better comparison between the two school types. Using this methodology in 2009, the CREDO team found that only 17% of charter schools outperformed traditional public schools, while 37% did worse and 46% showed no statistically significant difference. 4 They repeated this study on a slightly larger sample of students in 2013 and found that charter schools on average performed slightly better than in the 2009 study 5, but that, at the end of the day, an average charter school is just average.
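The matching step can be sketched in a few lines. The following is a minimal illustration on synthetic data, not CREDO’s actual algorithm: the variables, the distance rule, and all numbers are invented for demonstration. Each charter student is paired with the traditional public school student whose prior score is closest among students with the same income status.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic students: column 0 = prior test score, column 1 = low-income flag
charter = np.hstack([rng.normal(50, 10, (100, 1)),
                     rng.integers(0, 2, (100, 1))])
public = np.hstack([rng.normal(50, 10, (500, 1)),
                    rng.integers(0, 2, (500, 1))])

def virtual_twin(charter, public):
    """For each charter student, find the index of the closest
    public-school 'twin' on prior score, heavily penalizing any
    mismatch on the income flag (a crude nearest-neighbor match)."""
    twins = []
    for row in charter:
        dist = np.abs(public[:, 0] - row[0]) + 100 * (public[:, 1] != row[1])
        twins.append(int(np.argmin(dist)))
    return twins

twins = virtual_twin(charter, public)
# The matched pairs can now be compared on later outcomes.
```

Real implementations match on many more characteristics at once, but the logic is the same: compare each choice student only against observably similar peers.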
The virtual twin methodology is not perfect, because not every factor can be matched. There may still be unobservable differences between students who attend charter schools and their public school peers. For example, a family that takes the time and effort to apply to a charter school might be more involved in their child’s education than a family that simply sends their child to the neighborhood school, and that involvement might be why choice school students appear to perform better than traditional public school students. In other words, the result may be driven by the unobservable characteristics of the students who attend charter schools rather than by the actual effect of the charter schools themselves.
Randomization—another way to address the problem of selection bias
Using another method to mitigate selection bias, some researchers take advantage of the randomization inherent in a charter school lottery. When charter schools receive more applications than available spots, they are required to hold a randomized “lottery” to determine which students receive a spot. In a large study of charter schools, Gleason et al. (2010) compared the achievement of students who won charter lotteries and attended charter schools with students who lost charter lotteries and attended traditional public schools. Because the lotteries are random, we can assume that, on average, there is no difference between the people who won and those who lost 6.
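A toy simulation makes the logic of the lottery comparison concrete. All numbers below are invented for illustration: every simulated family applied, so winners and losers share the same average level of an unobserved trait (here labeled “motivation”), and the winner-minus-loser gap therefore estimates the true school effect even though motivation is never measured.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Applicant pool: all families applied, so all carry the
# unobserved "motivation" boost that non-applicants may lack.
motivation = rng.normal(5, 1, n)   # never observed by the researcher
baseline = rng.normal(50, 10, n)

# Random lottery: winners attend the charter, losers do not.
won = rng.random(n) < 0.5
charter_effect = 0.0               # assume no true effect, for illustration
scores = baseline + motivation + charter_effect * won

gap = scores[won].mean() - scores[~won].mean()
# Because the lottery is random, the gap recovers the true
# effect (~0 here) even though motivation is unobserved.
```

If families had instead sorted themselves into the two groups, the more motivated ones would cluster on one side and the gap would be biased.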
Randomized trials are the closest one can get to a perfect comparison. This methodology helps mitigate the selection issue present in the CREDO study, since the two populations being compared, the winners and the losers, both share the unobservable characteristics that lead a family to apply to a charter school in the first place. The Gleason study finds, on average, no statistically significant impact of charter schools on student achievement. Similar to the CREDO studies, Gleason reports positive outcomes for students from low-SES backgrounds. But even this randomized design has its limitations. For example, only schools that receive more applications than spots hold a lottery; the charter schools analyzed in this study were therefore ones that received many applications, potentially meaning they were better-than-average charter schools.
Big Data Analysis—a third method to account for selection bias
I set out to find a different method to add to the current understanding of the effect school choice has on student outcomes, taking into account the main issues involved, including selection bias and the unobserved factors that come with it. Increasingly, researchers collect data about students over time in what are called longitudinal studies. These studies often capture data about large numbers of students via surveys, resulting in large data sets. I decided to use one such data set, the High School Longitudinal Study of 2009 (HSLS:09). Using a variety of variables on student achievement, family background, and school characteristics from HSLS:09, I wanted to see whether I could shed light on the school choice debate.
The HSLS:09 data set comprises nearly 24,000 9th graders selected randomly from 944 schools. Students, parents, teachers, administrators, and counselors are all surveyed to collect a wide variety of data on both the students and their learning environment. This multi-level surveying, in concert with students’ test scores, provides a rich data set for analysis. For an extended explanation of these data, click here.
One of the main issues with using survey data is that it is impossible to account for every potential factor that determines student achievement. To isolate the true effect of participating in a school choice program, it would be necessary to hold constant every other potential difference between students. This is obviously an impossible task, especially considering the many unobserved and unmeasurable factors at play, such as differences in student motivation or innate ability. However, there are analytical and statistical strategies to help control for these differences and isolate the true relationship between school choice and student achievement. I used a variety of student, parent, teacher, and school controls to try to measure the underlying components that affect a student’s test score.
I set out assuming that five factors are most important in determining student achievement: 1) whether or not a student attends a choice school, 2) the student’s demographic characteristics, 3) the student’s motivation, 4) the student’s parental characteristics, and 5) the student’s teacher characteristics. If we had the data to measure all of these underlying factors, we could make a convincing case that our estimates truly capture the effect of choice on student achievement.
Unfortunately, many of these underlying constructs are unobservable, unmeasured, or layered with complexity. To mitigate this issue, I used factors I could measure that get at each underlying construct and are highly correlated with the unmeasured factors. For example, to capture student motivation, I controlled for whether students think getting good grades is important and whether students expect to graduate from college. The hope is that high student motivation, an unobservable characteristic, overlaps sufficiently with these measured attitudes to serve as a proxy. For results to be reliable, these relationships need to be strong but not perfect: with a very large number of students, as in this study, such proxies will on average account for motivation. The rest of the proxies are displayed in table 2. Click on each underlying construct for a deeper examination of the variables used to measure it. Click here for a full breakdown of the model used in estimation.
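The estimation behind this kind of analysis is, at its core, a regression of test scores on a choice-school indicator plus the proxy controls. The sketch below shows the shape of such a model on synthetic data; the variable names are invented stand-ins for HSLS:09 survey items, not the actual items or coefficients, and the true choice effect is set to zero by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Invented stand-ins for survey-based proxies (illustration only)
grades_important = rng.integers(0, 2, n)   # proxy for motivation
expects_college = rng.integers(0, 2, n)    # proxy for motivation
low_ses = rng.integers(0, 2, n)            # demographic control
choice_school = rng.integers(0, 2, n)      # attends a choice school

# Simulated scores: proxies matter, choice itself does not (by design)
score = (50 + 3 * grades_important + 4 * expects_college
         - 5 * low_ses + 0 * choice_school + rng.normal(0, 8, n))

# OLS: score = b0 + b1*choice + b2..b4 * controls
X = np.column_stack([np.ones(n), choice_school,
                     grades_important, expects_college, low_ses])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
# beta[1] is the estimated choice-school effect, near zero here
```

The real model uses many more controls and survey weights, but the interpretation of the choice coefficient is the same.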
The tricky bit—how to account for selection bias
Given these data and techniques, how did I compare students in choice schools with students in traditional public schools, knowing that the difference in school decision might stem from some unobservable characteristic that obscures the true comparison between the two groups?
My hypothesis going into this study was that a first look at the effect of choice schools on student achievement would show a positive effect because of selection bias; I expected that students in choice schools would be systematically different from those in traditional public schools due to parental factors that affected their selection of a choice program. However, after explicitly controlling for parental characteristics and making a much more valid comparison between students in both types of schools, I expected the initial positive result would not persist.
To control for this confounding factor, I used a variety of controls capturing parent involvement. I considered whether a parent attends any meeting at the school, a parent-teacher organization meeting, or a parent-teacher conference. I took into account whether a parent volunteers at their child’s school or helps fundraise for it. I also considered parents’ expectations of how far their student will get in school, and whether or not they help their student with homework. My assumption was that, together, these variables account for and overlap sufficiently with the unobservable characteristics of choice school families that would affect student achievement. Although these factors do not directly measure the underlying construct, I argue that they signal and proxy for the unobserved ones.
The strength of this approach is that it addresses the selection bias that comes into play with the virtual twin methodology, and it avoids some of the main issues of randomization, including looking only at oversubscribed schools. Its weakness is that it relies on the strength of my proxies, with no way to verify that they sufficiently account for selection bias. I argue that the variables above account for enough of the underlying determinants of student achievement for the results to be unbiased.
Click here for the descriptive statistics of all the variables used in estimation.
Using data from the High School Longitudinal Study of 2009 (HSLS:09) and the methodology above, I indeed found that an initial look at the relationship between participation in a school choice program and student learning shows a positive effect for students of low socioeconomic status. This result explains some of the promise and glamour that the idea of school choice receives. However, after using more robust methods and explicitly controlling for the differences in students and families who chose to attend choice programs, the once-promising result disappears.
To arrive at this conclusion, I first compared the achievement of students who went to choice schools with that of students who went to traditional public schools, accounting for their race, socioeconomic status, and intrinsic motivation. I found that attending a choice school had a positive impact on students from low socioeconomic backgrounds. Results based on simple comparisons like this are regularly held up in the media as evidence of the positive impact of school choice. To address selection bias and the possibility that unobserved parent characteristics explain why choice students appeared to perform better in my first comparison, I next added the parent-related variables. With these controls, I found that, on average, students in choice schools perform no better than students in traditional public schools. This result confirms my hypothesis and corroborates other literature indicating that, after accounting for selection bias, choice schools on the whole do not outperform traditional public schools. Lastly, when accounting for teacher quality, the results remain the same. Click here to see the full table of regression results.
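The logic of this stepwise comparison can be reproduced in miniature. In the simulation below (all numbers invented), an unobserved trait, parent involvement, both raises achievement and makes a family more likely to pick a choice school. A naive regression of scores on choice alone then shows a spurious positive effect, which shrinks toward zero once the involvement measure is added as a control:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Confounder: involved parents both pick choice schools more often
# AND raise achievement directly (all magnitudes are made up).
involved = rng.random(n) < 0.4
choice = rng.random(n) < np.where(involved, 0.6, 0.2)
score = 50 + 6 * involved + 0 * choice + rng.normal(0, 10, n)

def ols(y, *cols):
    """Least-squares fit; returns intercept followed by slopes."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(score, choice)[1]             # omits the confounder
adjusted = ols(score, choice, involved)[1]
# naive looks clearly positive; adjusted shrinks toward the true 0
```

This is exactly the pattern the analysis found in the HSLS:09 data: the apparent choice-school advantage is absorbed by the parent-involvement controls.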
In summary, looking at the simple relationship between choice schools and student achievement, I found a positive effect of choice schools, consistent with popular claims made in the headlines. However, when accounting for the observed and unobservable differences in the data, these once-promising results do not persist.
There are limitations to this study. Without random assignment, there is no way to be sure that selection bias has been fully accounted for. I can make an argument, and I hope that I have, that my methodology accounts for it, but we will never know for sure. One indicator that this study may sufficiently account for selection bias is that its results are consistent with randomized studies of school choice that also find no relationship between choice and student outcomes 7 8.
Additionally, it is worth noting that this study looks at choice schools on average. This does not mean that no choice schools are outperforming traditional public schools. Rather, it means that, as a whole, the choice school reform movement is not outperforming the status quo of traditional public schools. Further, this study does not distinguish between types of school choice. Because of data limitations, charter schools, magnet schools, and voucher programs were grouped together.
With school choice becoming increasingly popular among reformers, it is crucial to investigate its actual effect on students. Although there is a large body of existing research, it is important to keep looking for pieces of the solution as policies shift and school systems evolve. A single assessment of the choice system will not provide enough evidence on its own, but using an abundance of data and a range of techniques, we can continue to fill in more and more of the picture.
Next time you read about a school choice success, don’t accept the result outright. Consider the comparison being made, and ask: Are these two groups equivalent? Has the study sufficiently accounted for the unobservable differences between students in choice schools and students in traditional public schools?
- Ken Imperato et al., “Choice Program Data and Emerging Research: Questioning the Common Interpretations of Publicly Reported Indicators of Choice Program Success” (Magnets in a School Choice Arena, Goodwin College, East Hartford, CT, December 12, 2013), http://www.goodwin.edu/pdfs/magnetSchools/Kenneth_Imperato.pdf. ↩
- De La Torre, Vanessa. “Hartford ‘Sheff’ Students Outperform Those In City Schools,” September 12, 2013. http://articles.courant.com/2013-09-12/community/hc-hartford-sheff-scores-0913-20130912_1_open-choice-sheff-region-hartford-students. ↩
- Guggenheim, Davis, Billy Kimball, Lesley Chilcott, Bill Strickland, Geoffrey Canada, Michelle Rhee, Randi Weingarten, et al. 2011. Waiting for “Superman”. Hollywood, Calif: Paramount Home Entertainment. ↩
- Center for Research on Education Outcomes (CREDO). 2009. Multiple Choice: Charter School Performance in 16 States. Stanford, CA: CREDO. ↩
- Center for Research on Education Outcomes (CREDO). 2013. National charter school study 2013. Stanford, CA: CREDO. ↩
- Gleason, Philip, et al. The Evaluation of Charter School Impacts: Final Report. NCEE 2010-4029. National Center for Education Evaluation and Regional Assistance, 2010. http://eric.ed.gov/?id=ED510573. ↩
- Bifulco, Robert, Casey D. Cobb, and Courtney Bell. “Can Interdistrict Choice Boost Student Achievement? The Case of Connecticut’s Interdistrict Magnet School Program.” Educational Evaluation and Policy Analysis 31, no. 4 (December 1, 2009): 323–45. doi:10.3102/0162373709340917. ↩
- Gleason, Philip, Melissa Clark, Christina Clark Tuttle, and Emily Dwoyer. The Evaluation of Charter School Impacts: Final Report. NCEE 2010-4029. National Center for Education Evaluation and Regional Assistance, 2010. http://eric.ed.gov/?id=ED510573. ↩