'The Bell Curve' Turns 20
But two decades later, the topics it raised are as important and feared as ever. As technology becomes ever more complicated, and with an age of robotics on the horizon that could upend the labor market, it is fair to wonder whether there will long be a valued place in society for people with low intelligence. Many already see a
college degree as a necessity for a decent life. The continuing rise of cheap
travel, opportunities for women, and college attendance has enabled the
brightest people to increasingly segregate themselves socially, solidifying the
"cognitive elite" Herrnstein and Murray wrote about. Racial and class gaps in test scores
have changed little, and by some measures have widened, No Child Left Behind and Head Start be damned.
With the
immediate furor over the book having subsided, and yet with the subject of
intelligence retaining its quiet power over modern discourse, it's worth
revisiting The Bell Curve. Some of it hasn't aged well. But even in
light of new evidence, the book offers a convincing argument that intelligence,
as measured by IQ tests, is a significant contributor to the social patterns we
see around us.
The key
insight behind IQ is that most mental tests have something in common: A person
who does well on one test is likely to do well on others too, even if the tests
are quite different in content. The correlation, though far from perfect, allows
psychologists to statistically identify the "factor" that the tests
share, and to measure it in individuals. There's a lot of debate over what
exactly this factor--g, or general intelligence--represents, and whether
it's better to break intelligence up into smaller categories, such as
"crystallized" and "fluid" (terms that refer, roughly, to
general knowledge and problem-solving ability).
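Since that shared factor is identified statistically, a toy example may make the idea concrete. The sketch below, with entirely made-up loadings, simulates six tests that each mix a latent ability with test-specific noise; the first principal component of their correlation matrix then recovers the common thread, which is the basic logic (if not the exact machinery) behind g.

```python
# A toy illustration (made-up numbers) of how a shared factor is pulled
# out of a test battery: each simulated test mixes a latent general
# ability "g" with test-specific noise, and the first principal component
# of the correlation matrix recovers the shared part.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 5000, 6

g = rng.normal(size=n_people)                     # latent general ability
loads = np.array([0.8, 0.7, 0.6, 0.5, 0.7, 0.6])  # hypothetical g-loadings
noise = rng.normal(size=(n_people, n_tests))
scores = g[:, None] * loads + noise * np.sqrt(1 - loads**2)

R = np.corrcoef(scores, rowvar=False)  # every off-diagonal correlation is positive
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
share = eigvals[-1] / eigvals.sum()    # variance captured by the top component
v = eigvecs[:, -1] * np.sign(eigvecs[:, -1].sum())  # fix arbitrary sign

print(f"Top component captures {share:.0%} of the variance")
print("All tests load positively on it:", np.round(v, 2))
```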
But
whatever IQ is, it's difficult to change even with concerted effort, it relates
to the ability to process complex information, and differences between
individuals are partly genetic. All this was already well-established by the
time of The Bell Curve's publication. Most importantly, studies of twins and
adopted children had shown that sharing genes tends to give children similar
IQs, even if the kids don't also share a home environment. Herrnstein and
Murray reported estimates that IQ is 40 to 80 percent heritable, and assumed 60
percent in their calculations.
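The logic of those twin studies reduces to one line of arithmetic, known as Falconer's formula: identical twins share essentially all their genes, fraternal twins about half, so doubling the gap between their IQ correlations gives a rough heritability estimate. The correlations below are illustrative round numbers, not figures from the book.

```python
# Falconer's classical twin-study estimate: MZ (identical) twins share
# ~100% of their genes, DZ (fraternal) twins ~50%, so twice the difference
# in their trait correlations approximates heritability. Inputs are
# illustrative round numbers, not data from The Bell Curve.
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    return 2 * (r_mz - r_dz)

print(falconer_heritability(r_mz=0.85, r_dz=0.60))  # 0.5 -> roughly 50% heritable
```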
Twenty
years on, the evidence has gotten better, even if it remains difficult to nail
down a precise number. ("About half," wrote
three psychologists in Slate last month.) One recent study, rather
than focusing on siblings, analyzed the genomes and intelligence-test scores of
unrelated individuals, looking to see if individuals who were similar
genetically were also similar in intelligence. They were. The researchers
established lower-bound estimates that crystallized intelligence is 40 percent
heritable while fluid intelligence is 51 percent heritable. "Our results
unequivocally confirm that a substantial proportion of individual differences
in human intelligence is due to genetic variation," the team wrote.
Intelligence
seems to be the product of a large number of genes, each of which has very
little influence on the final result. This makes it difficult to tell exactly
which genes matter and how they work, but progress is steady. Just last month,
a study identified three genetic variants
that may increase IQ scores by about 0.3 points apiece by affecting a
"neurotransmitter pathway" in the brain.
But even
those who admit IQ is a measurable human trait might deny that it matters.
After all, one big predictor of where kids end up is how rich their parents
are. Can intelligence really make a difference when class is so strong?
To find
out, Herrnstein and Murray turned to the National Longitudinal Survey of Youth,
a government-funded study that has followed thousands of Americans since they
were adolescents and young adults in 1979. (Another wave began in 1997.)
Herrnstein and Murray estimated IQ scores based on these folks' performance on
the Armed Forces Qualification Test, a battery of cognitive exams, and measured
parental socioeconomic status (SES) by constructing an index that took into
account income, occupation, and education. And to remove the effects of race, a
topic they avoided until the book's later chapters, Herrnstein and Murray
focused only on whites.
From
there, it's a statistical clash of the titans: The IQ and SES data are entered
into a single equation (along with age) and used to predict life outcomes such
as wages, welfare dependency, out-of-wedlock childbearing, and crime.
Overwhelmingly, IQ turns out to be a better predictor than SES. In other words,
people with high IQs tend to have better outcomes, even after SES has been
accounted for--and the results suggest that if you find yourself in a
science-fiction movie and get to choose between a high IQ and high-status
parents, you should go with the former.
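The structure of that horse race is easy to sketch. The snippet below uses simulated data (the real analysis used the NLSY) to regress an outcome on standardized IQ, parental SES, and age in a single equation, then compares the coefficients; the book's finding, in these terms, was that the IQ coefficient usually dwarfed the SES one. The effect sizes here are hypothetical, chosen only to mimic that pattern.

```python
# A schematic version of the book's IQ-vs-SES "horse race", on simulated
# data (the real analysis used the NLSY). The outcome is regressed on
# standardized IQ, parental SES, and age together; the coefficients used
# to generate the data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ses = rng.normal(size=n)
iq = 0.4 * ses + np.sqrt(1 - 0.4**2) * rng.normal(size=n)  # IQ correlates with SES
age = rng.normal(size=n)
outcome = 0.35 * iq + 0.15 * ses + 0.05 * age + rng.normal(size=n)

X = np.column_stack([np.ones(n), iq, ses, age])            # intercept + predictors
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"IQ beta: {beta[1]:.2f}  SES beta: {beta[2]:.2f}")  # IQ wins this race
```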
Of
course, that people with high IQs have better outcomes doesn't prove that high
IQs cause better outcomes. But for many of the topics The Bell Curve
covers, skepticism along these lines borders on the absurd. Take wages, for
instance: It is obvious from casual observation, and supported by research,
that many of the highest-paying occupations (doctor, lawyer, accountant)
require an ability to deal with complex information. IQ also has clear benefits
when it comes to getting into and graduating from a demanding college--kids
with higher IQs do better in school at all levels, and the standardized tests
colleges use to screen applicants are highly correlated with IQ--or even
performing well in a more typical work environment.
And even in
the absence of causation, correlations can matter. We might be less than
positive that low IQ causes welfare dependency or crime, for instance--but
someone running a program to help the poor might want to know (from the
National Longitudinal Survey) that 45 percent of women who've received welfare
and 62 percent of men interviewed behind bars are in the bottom 20 percent of
the IQ distribution.
The Bell
Curve's findings on life outcomes were subject to some criticism, but
none of it invalidated the basic point that IQ matters. Here, for example, is a paper that used a more sophisticated
measure of SES (including family structure). The result? "Parental family
background is at least as important, and may be more important than IQ in
determining socioeconomic success in adulthood." So, the clashing titans
might be more evenly matched than Herrnstein and Murray claimed, and if you
find yourself in that science-fiction movie maybe you should just flip a coin.
Others noted that, statistically,
IQ explains a fairly low percentage of the variation in outcomes. This isn't a
criticism of the book so much as a quote from it. ("For virtually all of
the topics we will be discussing, cognitive ability accounts for only small to
middling proportions of the variation among people. It almost always explains
less than 20 percent of the variance, to use the statistician's term, usually
less than 10 percent and often less than 5 percent.") But more to the
point, if we're supposed to ignore IQ because IQ isn't statistically powerful
enough, we should probably ignore SES as meaningless too.
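That "percent of variance" language is just a squared correlation, and it's worth seeing how quickly it shrinks. A correlation of 0.45, which is quite strong for social science, explains only about 20 percent of the variance:

```python
# Variance explained is the square of the correlation coefficient, so even
# strong social-science correlations leave most variation unexplained.
# The correlations below are generic examples, not figures from the book.
for r in (0.20, 0.30, 0.45):
    print(f"correlation {r:.2f} -> {r**2:.0%} of variance explained")
# correlation 0.20 -> 4% of variance explained
# correlation 0.30 -> 9% of variance explained
# correlation 0.45 -> 20% of variance explained
```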
Of
course, few remember The Bell Curve as the book that made the case for
IQ by analyzing an all-white sample: They remember it as the book that argued
it was "highly likely" that the sizable black-white gap in IQ scores
is partly genetic. On this, The Bell Curve doesn't stand the test of
time--in 1994 it was very difficult to argue one way or the other, because our
knowledge of the human genome was weak and the other ways of approaching this
question are unsatisfactory.
Herrnstein
and Murray note, for example, that controlling for parental socioeconomic
status eliminates only about a third of the black-white gap--but this is
relevant only if blacks and whites with similarly situated parents tend to have
similar environments otherwise. They don't, for any number of reasons.
(Conservatives and even some liberals note
the cultural legacy of slavery and Jim Crow, and liberals have pointed out that blacks
with middle-class incomes are far more likely than comparable whites to live in
poor neighborhoods. Also: racism.) Herrnstein and Murray further note that the
gap is biggest on the tests that most closely measure g, but as they
concede, this doesn't prove that g has been suppressed by genes
specifically.
But if
the arguments in the book aren't looking so hot these days, many
arguments against the book are doing no better. A common refrain in the
wake of The Bell Curve's publication--it was still very much in vogue
when I attended college in the mid-2000s--was that the racial IQ gap couldn't
be genetic because race itself is a mere "social construct" with no
genetic basis. This theory has been thoroughly discredited (to
borrow a word from The Bell Curve's least thoughtful critics) in the
years since. We now know that some genetic variants are more common in some
racial groups than in others; what we don't know is whether or how those
variants influence intelligence. Even those who cling to the idea of race as a
"social construct" often admit this, making arguments
pertaining to the semantics of the word "race," not to the science of
human differences.
Two
decades ago we weren't close to knowing how the brain has evolved in the tens
of thousands of years since small groups of humans left Africa to populate
the rest of the globe. We're a lot closer now, but there's still a long way to
go. Rather than speculating, we should stay calm, wait for the science to sort
itself out, and remember that statistical group differences can't justify the
poor treatment of individuals.
Another
controversial area Herrnstein and Murray wade into is the question of whether
human beings are getting smarter or duller over time. Certainly, IQ scores are
rising, at a rate of a few points per generation--a phenomenon the authors
christened "the Flynn Effect" in honor of James Flynn, an intelligence
researcher who has studied it in depth. This rate of change is so high that
it's certainly due to environmental changes, not genetics--and it's possible
that humans are actually getting duller on the genetic level. Herrnstein
and Murray argue this is indeed the case, pointing out that lower-IQ women tend
to have more children than their higher-IQ counterparts. Genetically, they
suggest, we're losing at least an IQ point per generation.
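The arithmetic behind that "point per generation" claim is the breeder's equation from quantitative genetics: the genetic shift per generation equals heritability times the selection differential (how far the average parent sits from the population mean, weighted by number of children). Both inputs in the sketch below are hypothetical, chosen only to show how the numbers combine.

```python
# Breeder's equation, R = h^2 * S: the per-generation genetic shift equals
# heritability times the selection differential (the fertility-weighted
# parental mean minus the population mean). Both inputs are hypothetical,
# used only to illustrate the arithmetic behind a "point per generation".
def per_generation_shift(h_squared: float, selection_diff: float) -> float:
    return h_squared * selection_diff

print(per_generation_shift(h_squared=0.6, selection_diff=-1.7))  # about -1 IQ point
```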
Unfortunately,
they provide policy recommendations to go along with this analysis. One is that
we should cut poverty relief for poor mothers to avoid encouraging them to have
children. You don't have to be chief of the PC Police--or lack concern about the welfare
state's tendency to enable irresponsible decisions--to find this horrifying.
Nonetheless,
research and public policy proceed apace. Flynn himself, in his 2007 book What
Is Intelligence?, cataloged a variety of odd patterns in the IQ data (such
as that scores are rising on some subtests but not others) and explained in
detail how the modern environment may have caused the effect that bears his
name. A paper last year attributed
the IQ gains to improvements in the way people apply rules and heuristics to
problems. As for genetics, researchers have speculated that human brain size is shrinking
because IQ is no longer the boon to survival and reproduction that it once was,
and even that modern people have lower IQs in some ways than
the Victorians did. On the policy level, another of Herrnstein and Murray's
suggestions for improving societal IQ--making birth control easily and cheaply
available--has become official government policy and remains a cause for many,
albeit for reasons (hopefully) having nothing to do with intelligence.
There is
much more of interest in The Bell Curve: Extensive data on affirmative
action, foreshadows of the arguments Murray would make in Real Education
and Coming Apart, a libertarian vision of how to create communities
where people with low IQs have a place. It is worth reading or re-reading
today. With 20 years' hindsight, the book's flaws are as apparent as ever. But
so are its merits.