74 Interview: Harvard Researcher David Deming Takes the Long View on Head Start, Integration
Harvard University professor David Deming plays the long game. Deming has produced several studies looking at the far-reaching effects of education policies on students, including their college attendance, adult earnings, and involvement in the criminal justice system.
In a recent interview with The 74, Deming talks about the lasting influence of Head Start, test-based accountability, and school integration. He also discusses how his research might inform debates over early childhood education and the implementation of the Every Student Succeeds Act.
The interview has been lightly edited for clarity and length.
The 74: Can you tell me a little bit about your work on Head Start? I think it’s obviously really important now — some people want to get rid of the program. What have you found?
David Deming: This was the very first real academic paper I ever wrote, actually, on Head Start, and this is a good lesson for aspiring graduate students. It started off as a replication of an older study finding that the impacts of Head Start “fade out” for African-American children who participate in the program, meaning there’s an initial impact on achievement and then, by age 10 or so, there’s no difference between kids who were in Head Start and their siblings who were not. So the initial impact on test scores fades out.
I was interested in this fade-out question as a graduate student, and so I said, I’m going to replicate these findings and see if I can understand the fade-out patterns. When I got the data, I realized there were actually 15 more years of data since that paper was written, and now these kids were 19, 20, 21 years old and I could look at longer-run outcomes, and so I did that.
What I found was that I was able to replicate the results, so that’s a win for science. I found that even though the impacts of Head Start faded out, there were long-run impacts on the things we really care about, like high school graduation, college attendance, self-reported health, and labor force participation, so whether you were working. The interesting thing was that the African-American kids and lower-income kids, kids whose mothers had lower academic achievement, experienced the biggest benefits from Head Start and had the most fade-out. It actually seemed like fade-out was somehow predicting larger long-run impacts.
In the opposite of the way that you would expect?
My lesson from that is I don’t think we really know why test scores fade out, but in this case, I don’t really care, because test scores are only useful to the extent that they correspond to something in a kid’s life that we care about directly.
The bottom-line question is whether Head Start helps kids in the long run, and the answer is yes. My study is not the only one that finds that. I think the evidence is pretty strong that early childhood interventions in general, and Head Start in particular, benefit low-income children and pass a benefit-cost test. Meaning the expenses of the program are less than the total benefits, if you try to sum them up, not just to the kids but to society, so Head Start is a program that literally pays for itself.
When someone points to the most recent federal study saying the benefits are very short-lived, that’s not a strong basis for rejecting the program, right?
Not by itself. Certainly we’d like to see bigger test score benefits. It is possible that either Head Start or the alternatives available to the kids who didn’t get Head Start have changed. Re-analyses of the Head Start Impact Study have looked at this. One is by Patrick Kline and Chris Walters, and another is by my colleagues Luke Miratrix, Avi Feller, and Lindsay Page. Both papers answer the same question: How do we understand the impact of Head Start relative to not being in any preschool, and then the impact of Head Start relative to the other available alternatives?
The benefits of going to Head Start relative to going nowhere are very large and do not fade out. However, when you compare kids in Head Start to, let’s say, another preschool alternative in the city, whether it’s a state pre-K program or a private pre-K, those benefits seem to be there too, but they are a little bit smaller and do fade out.
The lesson there is that Head Start is much better for kids than nothing, and Head Start might be a little bit better than other programs, but it might not be, and we’re not sure about that. That is not the way the results of the Head Start Impact Study have been reported.
Let me pick a nit with your study. You are comparing siblings, one who went to Head Start and one who didn’t. How were you able to ensure that there wasn’t some reason one did and one didn’t that might itself have caused differences in outcomes?
I’m not able to tell you that for sure — that’s the reality of it. Of course, selection into Head Start within the family is clearly not random, in the sense that people are not throwing darts at a board, but what we hope is that it was not related to other things that determine differences between siblings.
Here’s a story: if you have two children and you really like one of them better, then every time you have a chance (let’s say you can only pick one kid to go to some program), you always pick the same kid. That’s going to make Head Start look better, because the reason you put that kid in Head Start and not the other one is that you like them better.
Maybe you also sent that kid to summer camp; you are just a better parent to that kid.
Exactly. So I tried to account for that by looking at variation across siblings in things that were determined prior to Head Start: Did the kid who was in Head Start have higher test scores at a really young age? Did they have higher birth weight? Was the mom more likely to be staying at home at the time, or the dad? Were they more likely to be nursed? A lot of different things, and I found no systematic evidence that the kids who went to Head Start were different on those dimensions from the kids who weren’t.
Let’s talk about your work on accountability, which is a hot topic now under ESSA. Can you tell us about your study in Texas?
Sure. So, we were interested in the question of the longer-run impacts of school accountability.
Which had rarely, if ever, been studied before your research?
That’s right, and I think while it’s always important to look at the long-run impacts of everything you do in schools, it’s particularly important in the case of accountability, because there are ways of raising test scores that are associated with kids learning more and doing better, and there are ways of raising test scores that are just strategic responses. There is a large literature in economics and related fields looking at all the ways that schools respond to accountability.
Many studies have found things ranging from making school lunches more nutritious, to suspending kids strategically to keep them out of testing, to classifying kids for special education, all that stuff.
I think what a lot of people have done is say, “Well, we’ve found evidence that schools respond strategically to high-stakes testing,” and take that by itself as evidence that accountability is bad. It could actually be the case that while some schools are responding strategically, they are also responding in ways that are good for kids — maybe you put your best teachers in the testing grade, and that’s unfortunate, but then you also hire extra teachers to tutor kids on the side, or you better align the curriculum with standards, or whatever. Accountability is just a package of things, and you want to ask, on net, when we raise kids’ test scores, are they better off in the long run?
In Texas we looked at this — we compared schools that were close to the margin of being rated low-performing versus acceptable, and then acceptable versus recognized. Recognized is the gold star for schools that are doing well.
We actually compare schools to themselves in a future year when they were relatively safe from accountability pressure. There was a very large range in which schools were rated acceptable. In some years, if the pass rate of the lowest-scoring subgroup was anywhere between 25 and 60 percent, the school was safe.
If you looked at your group of kids and you thought, “About half of them are going to pass,” probably nothing you are going to do is going to get them from 50 to 60 percent passing in one year, and probably nothing you do is going to drop them from 50 to 25 percent passing. You are like, “OK, I’m facing some accountability pressure because everybody is, but not as much as the school down the street from me that is at 59 or at 26 percent, where only a couple of kids passing or failing could make the difference.”
What happened was, in Texas, over time, the state ratcheted up the standards every year. The minimum threshold to be rated acceptable went from 30 to 35 to 40 to 45 percent passing. Every year it went up. What that means is a school with the same population of kids could be safe one year and not safe the next.
We said, “Let’s compare what happens in those schools when they are safe versus when they are not.” What we found was that for schools on the margin between being rated low-performing and acceptable, the test scores of low-scoring kids went up when those schools faced that pressure, which is consistent with many other studies. Then we followed those kids into adulthood. What we found was that those kids were more likely to attend college, more likely to graduate from college, and had higher earnings at age 25.