Tuesday, July 13, 2010
a jaundiced formula for spinning educational research into something that sounds interesting
Here's a jaundiced formula for spinning educational research into something that sounds interesting. Most researchers and reporters seem to follow this formula pretty closely*.
1. Sample a bunch of kids in category A, and a bunch of kids in category B.
Ex: A kids have computers in the home; B kids don't
Ex: A kids are white; B kids are nonwhite
Ex: A kids go to charter schools; B kids don't
2. For each group, measure some dependent variable, Y, that we care about.
Ex: grades, SAT scores, dropout rates, college attendance, college completion, long term impacts on wages, quality of life, etc.
3. Compare Y means for group A and group B.
3a. If the means differ and the A versus B debate is contested, side with group A.
3b. If the means don't differ and many people support one option, take the opposite stance. (Ex: "Charter schools don't outperform non-charter schools")
3c. If neither of those options works, continue on to step 4.
4. Introduce a demographic variable X (probably race, gender, or SES) as a control or interaction term in your regression analysis. It will probably be significant. Claim that A or B is "widening the racial achievement gap," "narrowing the gender gap," etc., as appropriate.
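The formula above can be sketched in a few lines of Python. This is a toy simulation, not a real study: the variable names, effect sizes, and causal story (computer "usage" as the real driver, with group membership causally inert) are all made up for illustration. The point is that steps 3 and 4 run fine, and X "shows up," without the analysis ever touching the actual mechanism.

```python
import random
import statistics

random.seed(0)

def simulate(n=1000):
    """Simulate kids where the outcome Y depends on an unobserved cause
    (how the computer is actually used), not on the cheap-to-observe
    A/B category itself."""
    kids = []
    for _ in range(n):
        group = random.choice(["A", "B"])   # step 1: cheap, easy category
        x = random.choice([0, 1])           # step 4: demographic variable
        usage = random.gauss(0, 1)          # the real causal pathway (unmeasured in practice)
        # step 2: outcome depends on usage and X, but not on group at all
        y = 50 + 5 * usage + 2 * x + random.gauss(0, 3)
        kids.append((group, x, y))
    return kids

kids = simulate()

# Step 3: compare group means. They barely differ, because group
# membership is not on the causal pathway.
mean_a = statistics.mean(y for g, x, y in kids if g == "A")
mean_b = statistics.mean(y for g, x, y in kids if g == "B")
print(f"mean(A) = {mean_a:.2f}, mean(B) = {mean_b:.2f}")

# Step 4: stratify on X. An X "effect" appears reliably, even though
# this tells us nothing about why or how schools (or computers) work.
mean_x0 = statistics.mean(y for g, x, y in kids if x == 0)
mean_x1 = statistics.mean(y for g, x, y in kids if x == 1)
print(f"mean(X=0) = {mean_x0:.2f}, mean(X=1) = {mean_x1:.2f}")
```

Run it a few times with different seeds: the A-versus-B comparison hovers near zero while the X gap persists, which is exactly the setup where step 3c hands you off to step 4 and a gap headline.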
Papers following this formula will frequently be publishable and newsworthy. (You can verify this, case by case, with the studies cited in that NYTimes article.) They will rarely make a substantive contribution to the science and policy of education. Awful. Awful. Awful.
Why? Because this approach is superficial. The scientific method is supposed to help us understand root causes, with an eye to making people better off. But that depends on starting with categorizations that are meaningfully tied to causal pathways. The distinctions we make have to matter.
In a great many educational studies, the categories used to split kids are cheap and easy to observe. Therefore, they make for easy studies and quick stereotypes. They feed political conflict about how to divide pies. But they don't matter in any deep, structural way.
Example: Does having a computer in the house make a kid smarter or dumber? It depends on how the computer is used. If the computer is in the attic, wrapped in plastic, the effect of computer ownership on grades, SAT scores, or whatever will be pretty close to zero. If the computer is only used to play games, the effect probably won't be positive; and if games crowd out homework, the effect will be negative. No real surprises there. And that's about as far as these studies usually go. "Computers not a magic bullet. Next!"
This is more or less the state of knowledge with respect to school funding, busing, charter schools, etc. We know that one blunt policy intervention after another does not work miracles. We haven't really gotten under the hood of what makes the complex social system of education work. It's like coming up with a theory of how airplanes fly based on the colors they're painted. ("White airplanes travel slower than airplanes painted camouflage colors, but tail markings have little effect on air speed.") You may be able to explain more than nothing, but you certainly haven't grasped the forces that make the system work.
To say the same thing in different words, scientists are supposed to ask "why?" Studies that say "kids in group A are more Y than kids in group B" don't answer the why question. They are descriptive, not causal. Without a deeper causal understanding of why schools work or don't work, I don't think we're ever going to stop chasing fads and really make things better.
*This is an epistemological critique of just about every quantitative article on education. In general, I'm supportive of the increasing influence of economic/econometric analysis in education policy, but this is one area where we quants may be making things worse, not better. Hat tip to Matt for sending the article that reminded me how much the failings of this literature frustrate me.