Alison Gopnik: The Wall Street Journal Columns
Mind & Matter, now once per month
(Click on a title for the text, or on a date for a link to The Wall Street Journal *)
The Many Minds of the Octopus (15 Apr 2021)
The Power of the Wandering Mind (25 Feb 2021)
Our Sense of Fairness Is Beyond Politics (21 Jan 2021)
2020
Despite Covid-19, Older People Are Still Happier (11 Dec 2020)
What AI Can Learn From Parents (5 Nov 2020)
Innovation Relies on Imitation (1 Oct 2020)
A Good Life Doesn't Mean an Easy One (28 Aug 2020)
Learning Without a Brain (23 Jul 2020)
Why Elders Are Indispensable for All of Us (12 Jun 2020)
How Humans Evolved to Care for Others (16 Apr 2020)
Detecting Fake News Takes Time (20 Feb 2020)
Humans Evolved to Love Baby Yoda (16 Jan 2020)
2019
Why the Old Look Down on the Young (5 Dec 2019)
Parents Need to Help Their Children Take Risks (24 Oct 2019)
Teenage Rebels with a Cause (12 Sep 2019)
How Early Do Cultural Differences Start? (11 Jul 2019)
The Explosive Evolution of Consciousness (5 Jun 2019)
Psychedelics as a Path to Social Learning (25 Apr 2019)
What AI Is Still Far From Figuring Out (20 Mar 2019)
Young Children Make Good Scientists (14 Feb 2019)
A Generational Divide in the Uncanny Valley (10 Jan 2019)
2018
For Gorillas, Being a Good Dad Is Sexy (30 Nov 2018)
The Cognitive Advantages of Growing Older (2 Nov 2018)
Imaginary Worlds of Childhood (20 Sep 2018)
Like Us, Whales May Be Smart Because They're Social (16 Aug 2018)
For Babies, Life May Be a Trip (18 Jul 2018)
Who's Most Afraid to Die? A Surprise (6 Jun 2018)
Curiosity Is a New Power in Artificial Intelligence (4 May 2018)
Grandparents: The Storytellers Who Bind Us (29 Mar 2018)
Are Babies Able to See What Others Feel? (22 Feb 2018)
What Teenagers Gain from Fine-Tuned Social Radar (18 Jan 2018)
2017
The Smart Butterfly's Guide to Reproduction (6 Dec 2017)
The Power of Pretending: What Would a Hero Do? (1 Nov 2017)
The Potential of Young Intellect, Rich or Poor (29 Sep 2017)
Do Men and Women Have Different Brains? (25 Aug 2017)
Whales Have Complex Culture, Too (3 Aug 2017)
How to Get Old Brains to Think Like Young Ones (7 Jul 2017)
What the Blind See (and Don't) When Given Sight (8 Jun 2017)
How Much Do Toddlers Learn From Play? (11 May 2017)
The Science of 'I Was Just Following Orders' (12 Apr 2017)
How Much Screen Time Is Safe for Teens? (17 Mar 2017)
When Children Beat Adults at Seeing the World (16 Feb 2017)
Flying High: Research Unveils Birds' Learning Power (18 Jan 2017)
2016
When Awe-Struck, We Feel Both Smaller and Larger (22 Dec 2016)
The Brain Machinery Behind Daydreaming (23 Nov 2016)
Babies Show a Clear Bias--To Learn New Things (26 Oct 2016)
Our Need to Make and Enforce Rules Starts Very Young (28 Sep 2016)
Should We Let Toddlers Play with Saws and Knives? (31 Aug 2016)
Want Babies to Learn from Video? Try Interactive (3 Aug 2016)
A Small Fix in Mind-Set Can Keep Students in School (16 Jun 2016)
Aliens Rate Earth: Skip the Primates, Come for the Crows (18 May 2016)
The Psychopath, the Altruist and the Rest of Us (21 Apr 2016)
Young Mice, Like Children, Can Grow Up Too Fast (23 Mar 2016)
How Babies Know That Allies Can Mean Power (25 Feb 2016)
To Console a Vole: A Rodent Cares for Others (26 Jan 2016)
Science Is Stepping Up the Pace of Innovation (1 Jan 2016)
2015
Giving Thanks for the Innovation That Saves Babies (25 Nov 2015)
Who Was That Ghost? Science's Reassuring Reply (28 Oct 2015)
Is Our Identity in Intellect, Memory or Moral Character? (9 Sep 2015)
Babies Make Predictions, Too (12 Aug 2015)
Aggression in Children Makes Sense - Sometimes (16 Jul 2015)
Smarter Every Year? Mystery of the Rising IQs (27 May 2015)
Brains, Schools and a Vicious Cycle of Poverty (13 May 2015)
The Mystery of Loyalty, in Life and on 'The Americans' (1 May 2015)
How 1-Year-Olds Figure Out the World (15 Apr 2015)
How Children Develop the Idea of Free Will (1 Apr 2015)
How We Learn to Be Afraid of the Right Things (18 Mar 2015)
Learning From King Lear: The Saving Grace of Low Status (4 Mar 2015)
The Smartest Questions to Ask About Intelligence (18 Feb 2015)
The Dangers of Believing that Talent Is Innate (4 Feb 2015)
What a Child Can Teach a Smart Computer (22 Jan 2015)
Why Digital-Movie Effects Still Can't Do a Human Face (8 Jan 2015)
2014
How Children Get the Christmas Spirit (24 Dec 2014)
Who Wins When Smart Crows and Kids Match Wits? (10 Dec 2014)
DNA and the Randomness of Genetic Problems (25 Nov 2014)
How Humans Learn to Communicate with Their Eyes (19 Nov 2014)
A More Supportive World Can Work Wonders for the Aged (5 Nov 2014)
What Sends Teens Toward Triumph or Tribulation (22 Oct 2014)
Campfires Helped Inspire Community Culture (8 Oct 2014)
Poverty's Vicious Cycle Can Affect Our Genes (24 Sep 2014)
Humans Naturally Follow Crowd Behavior (12 Sep 2014)
Even Children Get More Outraged at 'Them' Than at 'Us' (27 Aug 2014)
In Life, Who Wins, the Fox or the Hedgehog? (15 Aug 2014)
Do We Know What We See? (31 Jul 2014)
Why Is It So Hard for Us to Do Nothing? (18 Jul 2014)
A Toddler's Soufflés Aren't Just Child's Play (3 Jul 2014)
For Poor Kids, New Proof That Early Help Is Key (13 Jun 2014)
Rice, Wheat and the Values They Sow (30 May 2014)
What Made Us Human? Perhaps Adorable Babies (16 May 2014)
Grandmothers: The Behind-the-Scenes Key to Human Culture? (2 May 2014)
See Jane Evolve: Picture Books Explain Darwin (18 Apr 2014)
Scientists Study Why Stories Exist (4 Apr 2014)
The Kid Who Wouldn't Let Go of 'The Device' (21 Mar 2014)
Why You're Not as Clever as a 4-Year-Old (7 Mar 2014)
Are Schools Asking to Drug Kids for Better Test Scores? (21 Feb 2014)
The Psychedelic Road to Other Conscious States (7 Feb 2014)
Time to Retire the Simplicity of Nature vs. Nurture (24 Jan 2014)
The Surprising Probability Gurus Wearing Diapers (10 Jan 2014)
2013
What Children Really Think About Magic (28 Dec 2013)
Trial and Error in Toddlers and Scientists (14 Dec 2013)
Gratitude for the Cosmic Miracle of a Newborn Child (29 Nov 2013)
The Brain's Crowdsourcing Software (16 Nov 2013)
World Series Recap: May Baseball's Irrational Heart Keep On Beating (2 Nov 2013)
Drugged-out Mice Offer Insight into the Growing Brain (4 Oct 2013)
Poverty Can Trump a Winning Hand of Genes (20 Sep 2013)
Is It Possible to Reason about Having a Child? (7 Sep 2013)
Even Young Children Adopt Arbitrary Rituals (24 Aug 2013)
The Gorilla Lurking in Our Consciousness (9 Aug 2013)
Does Evolution Want Us to Be Unhappy? (27 Jul 2013)
How to Get Children to Eat Veggies (13 Jul 2013)
What Makes Some Children More Resilient? (29 Jun 2013)
Wordsworth, The Child Psychologist (15 Jun 2013)
Zazes, Flurps and the Moral World of Kids (31 May 2013)
How Early Do We Learn Racial 'Us and Them'? (18 May 2013)
How the Brain Really Works (4 May 2013)
Culture Begets Marriage - Gay or Straight (21 Apr 2013)
For Innovation, Dodge the Prefrontal Police (5 Apr 2013)
Sleeping Like a Baby, Learning at Warp Speed (22 Mar 2013)
Why Are Our Kids Useless? Because We're Smart (8 Mar 2013)
THE MANY MINDS OF THE OCTOPUS

Cephalopods are having a moment. An octopus stars in a documentary nominated for an Academy Award ("My Octopus Teacher"). Octos, as the scuba-diving philosopher Peter Godfrey-Smith calls them, also play a leading role in his marvelous new book "Metazoa," alongside a supporting cast of corals, sponges, sharks and crabs. (I like Mr. Godfrey-Smith's plural, which avoids the tiresome debate over Latin and Greek endings.)

Part of the allure of the octos is that they are both very smart, probably the smartest of invertebrates, and extremely weird. The intelligence and weirdness may be connected and can perhaps teach us something about those other intelligent, weird animals we call Homo sapiens.

Smart birds and mammals tend to have long lives and an especially long, protected childhood. Crows and chimps put a lot of work into taking care of their helpless babies. But, sadly and strangely, the intelligent octos only live for a year and don't really have a childhood at all. They die soon after reproducing and, like the spider heroine of "Charlotte's Web," don't even live to see the next generation grow up, let alone look after them.

Smart birds and mammals also keep their neurons in one place—their brains. But octos split them up. They have over 500 million neurons altogether, about as many as dogs. But their eight arms contain as many neurons as their heads. The arms seem able to act as independent agents, waving and wandering, exploring and sensing the world around them—even reaching out to the occasional diving philosopher or filmmaker. Mr. Godfrey-Smith's book has a fascinating discussion of how it must feel to have this sort of split consciousness, nine selves all inhabiting the same body.

I think there might be a link between these two strange facts of octopus life. I've previously argued that childhood and intelligence are correlated because of what computer scientists call the "explore-exploit" trade-off: It's very difficult to design a single system that's curious and imaginative—that is, good at exploring—and at the same time, efficient and effective—or good at exploiting. Childhood gives animals a chance to explore and learn first; then when they grow up, they can exploit what they've learned to get things done.

Childhood isn't the only way to solve the explore-exploit problem. Bees, who, like octos, are smart but short-lived, use a division of labor, with scouts who explore and workers who exploit. But octos are much more solitary than bees.

The evolutionary path that led to the octos diverged from ours hundreds of millions of years ago, before the first animal crawled out of the sea. They must have developed a different way to solve the explore-exploit dilemma. Perhaps their eight-plus-one brains serve the same function as the different phases of human development, or the different varieties of bees. The playful, exploratory arms can come under the control of the brain when it's time to act—to mate, feed or flee. The head might feel kind of like a preschool teacher on an outing, trying to corral eight wandering children and to get them to their destination. (Imagine if your arms were as contrary as your 2-year-old!)

We grown-up humans may not be so different. Human adults are "neotenous apes," which means we retain more childhood characteristics than our primate relatives do. We keep our brains in our heads, but neuroscience and everyday experience suggest that we too have divided selves.
My grown-up, efficient prefrontal cortex keeps my wandering, exploratory inner child in line. Or tries to, anyway.
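An aside for readers curious about the computer-science idea behind this column: the explore-exploit trade-off is usually illustrated with "multi-armed bandit" problems. The sketch below is a minimal epsilon-greedy simulation written for this archive, not taken from the column or from any published model; the payoff numbers, the epsilon schedule and the names are all invented for illustration. It compares an agent that explores heavily at first and then exploits (a rough analogue of childhood followed by adulthood) with one that exploits from the very first trial.

import random

# Illustrative sketch only; all numbers and names are made up for this example.
# Three "options" with hidden average payoffs; the agent must learn which is best.

def run(payoffs, epsilon_schedule, seed=1):
    """Epsilon-greedy bandit: on each trial, explore a random option with
    probability epsilon, otherwise exploit the option with the best estimate."""
    rng = random.Random(seed)
    estimates = [0.0] * len(payoffs)   # running estimate of each option's payoff
    counts = [0] * len(payoffs)        # how often each option has been tried
    total = 0.0
    for epsilon in epsilon_schedule:   # one epsilon value per trial
        if rng.random() < epsilon:
            choice = rng.randrange(len(payoffs))                           # explore
        else:
            choice = max(range(len(payoffs)), key=lambda i: estimates[i])  # exploit
        reward = rng.gauss(payoffs[choice], 1.0)                           # noisy outcome
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
        total += reward
    return total

payoffs = [0.2, 0.5, 0.9]  # made-up values; the third option is genuinely best
# "Childhood first": explore half the time for 200 trials, then exploit for 800.
child_then_adult = run(payoffs, [0.5] * 200 + [0.0] * 800)
# No childhood: exploit from the very first trial.
all_business = run(payoffs, [0.0] * 1000)
print(child_then_adult, all_business)

In a typical run the explore-first agent is more likely to have located the best option before settling down, though any single run depends on the noise; the point is only to make the column's trade-off concrete, not to model octopuses or children.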
LEARNING WITHOUT A BRAIN

It might seem obvious that you need a brain to be intelligent, but a new area of research called "basal cognition" explores whether there are kinds of intelligence that don't require neurons and synapses. Some of the research was reported in a special issue of the Philosophical Transactions of the Royal Society last year. These studies may help to answer deep questions about the nature and evolution of intelligence, but the experiments are also just plain fascinating, with truly weird creatures and even weirder results.

Slime molds, for example, are very large single-celled organisms that can agglomerate into masses, creeping across the forest floor and feeding on decaying plants. (One type is called dog vomit slime mold, which gives you an idea of what they look like.) They can also retreat into a sort of freeze-dried capsule form, losing much of their protein and DNA in the process, and stay that way for months. But just add water and the reconstituted slime mold is as good as new.

They are also fussy eaters. If you put them down on top of their favorite meal of agar and Quaker oats and add salt or quinine to one part of it, they'll avoid that part, at least at first. The biologists Aurèle Boussard and Audrey Dussutour at the University of Toulouse and their colleagues used this fact to show that slime molds can learn in a simple way called habituation. If the only way to get the oats is to eat the salt too, the molds eventually get used to it and stop objecting. Remarkably, this information somehow persists for up to a month, even through their period of desiccated hibernation.

Flatworms are equally weird. Cut one into a hundred pieces and each piece will regenerate into a perfect new worm. (A slime mold-flatworm alliance against the humans would make a great horror movie.) But how do the cells in the severed flatworm fragment know how to grow into a head and a tail? Santosh Manicka and Michael Levin of Tufts University argue in the special issue that regeneration involves a kind of cognition. The process is remarkably robust: You can move the cells that usually make a head to the tail location, and they will somehow figure out how to make a tail instead. The researchers argue that this ability to take multiple paths to achieve the same goal requires a kind of intelligence.

Regeneration involves the standard mechanisms that allow the DNA in a cell to manufacture proteins. But Dr. Levin and his colleagues have shown that flatworm cells also communicate information through electricity, signaling to other nearby cells in much the way that neurons do. In experiments that would make Dr. Frankenstein proud, the researchers altered those electrical signals to produce a worm that consistently regenerates with two heads, or even one that grows the head of another related species of flatworm.

This research has some practical implications: It would be great if human accident victims could grow back their limbs as easily as flatworms do. But the studies also speak to a profound biological and philosophical conundrum. Where do cognition and intelligence come from? How could natural selection turn single-celled amoebas into Homo sapiens? Dr. Levin thinks that the electrical communications that help flatworms regenerate might have evolved into the subtler mechanisms of brain communication. Those creepy slime molds and flatworms might help to explain how humans got smart.
TEENAGE REBELS WITH A CAUSE

Teenagers are paradoxical. That's a mild and detached way of saying something that parents often express with considerably stronger language. But the paradox is scientific as well as personal. In adolescence, helpless and dependent children who have relied on grown-ups for just about everything become independent people who can take care of themselves and help each other. At the same time, once cheerful and compliant children become rebellious teenage risk-takers, often to the point of self-destruction. Accidental deaths go up dramatically in adolescence.

A new study published in the journal Child Development, by Eveline Crone of the University of Leiden and colleagues, suggests that the positive and negative sides of teenagers go hand in hand. The study is part of a new wave of thinking about adolescence. For a long time, scientists and policy makers concentrated on the idea that teenagers were a problem that needed to be solved. The new work emphasizes that adolescence is a time of opportunity as well as risk.

The researchers studied "prosocial" and rebellious traits in more than 200 children and young adults, ranging from 11 to 28 years old. The participants filled out questionnaires about how often they did things that were altruistic and positive, like sacrificing their own interests to help a friend, or rebellious and negative, like getting drunk or staying out late. Other studies have shown that rebellious behavior increases as you become a teenager and then fades away as you grow older. But the new study shows that, interestingly, the same pattern holds for prosocial behavior. Teenagers were more likely than younger children or adults to report that they did things like unselfishly help a friend.

Most significantly, there was a positive correlation between prosociality and rebelliousness. The teenagers who were more rebellious were also more likely to help others. The good and bad sides of adolescence seem to develop together. Is there some common factor that underlies these apparently contradictory developments?

One idea is that teenage behavior is related to what researchers call "reward sensitivity." Decision-making always involves balancing rewards and risks, benefits and costs. "Reward sensitivity" measures how much reward it takes to outweigh risk. Teenagers are particularly sensitive to social rewards—winning the game, impressing a new friend, getting that boy to notice you. Reward sensitivity, like prosocial behavior and risk-taking, seems to go up in adolescence and then down again as we age. Somehow, when you hit 30, the chance that something exciting and new will happen at that party just doesn't seem to outweigh the effort of getting up off the couch.

The study participants filled out a separate "fun-seeking" questionnaire that measured reward sensitivity with statements like "I'm always willing to try something new if I think it will be fun." This scale correlated with both prosociality and rebelliousness. What's more, the researchers were able to track the responses of participants over a four-year period and found that those who had been most eager for experience when they were younger became the most rebellious teenagers—but also the most altruistic.

This new research suggests that Cyndi Lauper was right: Girls (and boys) just wanna have fun, and that's what makes them into paradoxically good and bad, rebellious and responsible teenagers.
In 19th-century England, the Brontë children created Gondal, an imaginary kingdom full of melodrama and intrigue. Emily and Charlotte Brontë grew up to write the great novels “Wuthering Heights” and “Jane Eyre.” The fictional land of Narnia, chronicled by C.S. Lewis in a series of classic 20th-century novels, grew out of Boxen, an imaginary kingdom that Lewis shared with his brother when they were children. And when the novelist Anne Perry was growing up in New Zealand in the 1950s, she and another girl created an imaginary kingdom called Borovnia as part of an obsessive friendship that ended in murder—the film “Heavenly Creatures” tells the story. But what about Abixia? Abixia is an island nation on the planet Rooark, with its own currency (the iinter, divided into 12 skilches), flag and national anthem. It’s inhabited by cat-humans who wear flannel shirts and revere Swiss army knives—the detailed description could go on for pages. And it was created by a pair of perfectly ordinary Oregon 10-year-olds. Abixia is a “paracosm,” an extremely detailed and extensive imaginary world with its own geography and history. The psychologist Marjorie Taylor at the University of Oregon and her colleagues discovered Abixia, and many other worlds like it, by talking to children. Most of what we know about paracosms comes from writers who described the worlds they created when they were children. But in a paper forthcoming in the journal Child Development, Prof. Taylor shows that paracosms aren’t just the province of budding novelists. Instead, they are a surprisingly common part of childhood. Prof. Taylor asked 169 children, ages eight to 12, whether they had an imaginary world and what it was like. They found that about 17 percent of the children had created their own complicated universe. Often a group of children would jointly create a world and maintain it, sometimes for years, like the Brontë sisters or the Lewis brothers. And grown-ups were not invited in. Prof. Taylor also tried to find out what made the paracosm creators special. They didn’t score any higher than other children in terms of IQ, vocabulary, creativity or memory. Interestingly, they scored worse on a test that measured their ability to inhibit irrelevant thoughts. Focusing on the stern and earnest real world may keep us from wandering off into possible ones. But the paracosm creators were better at telling stories, and they were more likely to report that they also had an imaginary companion. In earlier research, Prof. Taylor found that around 66% of preschoolers have imaginary companions; many paracosms began with older children finding a home for their preschool imaginary friends. Children with paracosms, like children with imaginary companions, weren’t neurotic loners either, as popular stereotypes might suggest. In fact, if anything, they were more socially skillful than other children. Why do imaginary worlds start to show up when children are eight to 12 years old? Even when 10-year-olds don’t create paracosms, they seem to have a special affinity for them—think of all the young “Harry Potter” fanatics. And as Prof. Taylor points out, paracosms seem to be linked to all the private clubhouses, hidden rituals and secret societies of middle childhood. Prof. Taylor showed that preschoolers who create imaginary friends are particularly good at understanding other people’s minds—they are expert at everyday psychology. For older children, the agenda seems to shift to what we might call everyday sociology or geography. 
Children may create alternative societies and countries in their play as a way of learning how to navigate real ones in adult life. Of course, most of us leave those imaginary worlds behind when we grow up—the magic portals close. The mystery that remains is how great writers keep the doors open for us all.
THE SMART BUTTERFLY'S GUIDE TO REPRODUCTION We humans have an exceptionally long childhood, generally bear just one child at a time and work hard to take care of our children. Is this related to our equally distinctive large brains and high intelligence? Biologists say that, by and large, the smarter species of primates and even birds mature later, have fewer babies, and invest more in those babies than do the dimmer species. “Intelligence” is defined, of course, from a human perspective. Plenty of animals thrive and adapt without a large brain or developed learning abilities. But how far in the animal kingdom does this relationship between learning and life history extend? Butterflies are about as different from humans as could be—laying hundreds of eggs, living for just a few weeks and possessing brains no bigger than the tip of a pen. Even by insect standards, they’re not very bright. A bug-loving biology teacher I know perpetually complains that foolish humans prefer pretty but vapid butterflies to her brilliant pet cockroaches. But entomologist Emilie Snell-Rood at the University of Minnesota and colleagues have found a similar relationship of learning to life-history in butterflies. The insects that are smarter have a longer period of immaturity and fewer babies. The research suggests that these humble creatures, which have existed for roughly 50 million years, can teach us something about how to adapt to a quickly changing world. Climate change or habitat loss drives some animals to extinction. But others alter the development of their bodies or behavior to suit a changing new environment, demonstrating what scientists call “developmental plasticity.” Dr. Snell-Rood wants to understand these fast adaptations, especially those caused by human influence. She has shown, for example, how road-salting has altered the development of curbside butterflies. Learning is a particularly powerful kind of plasticity. Cabbage white butterflies, the nemesis of the veggie gardener, flit from kale to cabbage to chard, in search of the best host for their eggs after they hatch and larvae start munching. In a 2009 paper in the American Naturalist, Dr. Snell-Rood found that all the bugs start out with a strong innate bias toward green plants such as kale. But some adventurous and intelligent butterflies may accidentally land on a nutritious red cabbage and learn that the red leaves are good hosts, too. The next day those smart insects will be more likely to seek out red, not just green, plants. In a 2011 paper in the journal Behavioral Ecology, Dr. Snell-Rood showed that the butterflies who were better learners also took longer to reach reproductive maturity, and they produced fewer eggs overall. When she gave the insects a hormone that sped up their development, so that they grew up more quickly, they were worse at learning. In a paper in the journal Animal Behaviour published this year, Dr. Snell-Rood looked at another kind of butterfly intelligence. The experimenters presented cabbage whites with a choice between leaves that had been grown with more or less fertilizer, and leaves that either did or did not have a dead, carefully posed cabbage white pinned to them. Some of the insects laid eggs all over the place. But some preferred the leaves that were especially nutritious. What’s more, these same butterflies avoided leaves that were occupied by other butterflies, where the eggs would face more competition. The choosier butterflies, like the good learners, produced fewer eggs overall. 
There was a trade-off between simply producing more young and taking the time and care to make sure those young survived. In genetic selection, an organism produces many kinds of offspring, and only the well-adapted survive. But once you have a brain that can learn, even a butterfly brain, you can adapt to a changing environment in a single generation. That will ensure more reproductive success in the long run.

THE POTENTIAL OF YOUNG INTELLECT, RICH OR POOR

Inequality starts early. In 2015, 23% of American children under 3 grew up in poverty, according to the Census Bureau. By the time children reach first grade, there are already big gaps, based on parents' income, in academic skills like reading and writing. The comparisons look even starker when you contrast middle-class U.S. children and children in developing countries like Peru. Can schooling reverse these gaps, or are they doomed to grow as the children get older?

Scientists like me usually study preschoolers in venues like university preschools and science museums. The children are mostly privileged, with parents who have given them every advantage and are increasingly set on giving instruction to even the youngest children. So how can we reliably test whether certain skills are the birthright of all children, rich or poor? My psychology lab at the University of California, Berkeley, has been trying to provide at least partial answers, and my colleagues and I published some of the results in the Aug. 23 edition of the journal Child Development.

Our earlier research has found that young children are remarkably good at learning. For example, they can figure out cause-and-effect relationships, one of the foundations of scientific thinking. How can we ask 4-year-olds about cause and effect? We use what we call "the blicket detector"—a machine that lights up when you put some combinations of different-shaped blocks on it but not others. The subjects themselves don't handle the blocks or the machine; an experimenter demonstrates it for them, using combinations of one and two blocks.

In the training phase of the experiment, some of the young children saw a machine that worked in a straightforward way—some individual blocks made it go and others didn't. The rest of the children observed a machine that worked in a more unusual way—only a combination of two specific blocks made it go. We also used the demonstration to train two groups of adults. Could the participants, children and adults alike, use the training data to figure out how a new set of blocks worked? The very young children did. If the training blocks worked the unusual way, they thought that the new blocks would also work that way, and they used that assumption to determine which specific blocks caused the machine to light up. But most of the adults didn't get it—they stuck with the obvious idea that only one block was needed to make the machine run.

In the Child Development study of 290 children, we set out to see what less-privileged children would do. We tested 4-year-old Americans in preschools for low-income children run by the federal Head Start program, which also focuses on health, nutrition and parent involvement. These children did worse than middle-class children on vocabulary tests and "executive function"—the ability to plan and focus. But the poorer children were just as good as their wealthier counterparts at finding the creative answer to the cause-and-effect problems.
Then, in Peru, we studied 4-year-olds in schools serving families who mostly have come from the countryside and settled in the outskirts of Lima, and who have average earnings of less than $12,000 a year. These children also did surprisingly well. They solved even the most difficult tasks as well as the middle-class U.S. children (and did better than adults in Peru or the U.S.). Though the children we tested weren’t from wealthy families, their parents did care enough to get them into preschool. We didn’t look at how children with less social support would do. But the results suggest that you don’t need middle-class enrichment to be smart. All children may be born with the ability to think like creative scientists. We need to make sure that those abilities are nurtured, not neglected. WHALES HAVE COMPLEX CULTURE, TOO How does a new song go viral, replacing the outmoded hits of a few years ago? How are favorite dishes passed on through the generations, from grandmother to grandchild? Two new papers in the Proceedings of the National Academy of Sciences examine the remarkable and distinctive ability to transmit culture. The studies describe some of the most culturally sophisticated beings on Earth. Or, to be more precise, at sea. Whales and other cetaceans, such as dolphins and porpoises, turn out to have more complex cultural abilities than any other animal except us. For a long time, people thought that culture was uniquely human. But new studies show that a wide range of animals, from birds to bees to chimpanzees, can pass on information and behaviors to others. Whales have especially impressive kinds of culture, which we are only just beginning to understand, thanks to the phenomenal efforts of cetacean specialists. (As a whale researcher once said to me with a sigh, “Just imagine if each of your research participants was the size of a 30-ton truck.”) One of the new studies, by Ellen Garland of the University of St. Andrews in Scotland and her colleagues, looked at humpback whale songs. Only males sing them, especially in the breeding grounds, which suggests that music is the food of love for cetaceans, too—though the exact function of the songs is still obscure. The songs, which can last for as long as a half-hour, have a complicated structure, much like human language or music. They are made up of larger themes constructed from shorter phrases, and they have the whale equivalent of rhythm and rhyme. Perhaps that’s why we humans find them so compelling and beautiful. The songs also change as they are passed on, like human songs. All the male whales in a group sing the same song, but every few years the songs are completely transformed. Researchers have trailed the whales across the Pacific, recording their songs as they go. The whales learn the new songs from other groups of whales when they mingle in the feeding grounds. But how? The current paper looked at an unusual set of whales that produced rare hybrid songs—a sort of mashup of songs from different groups. Hybrids showed up as the whales transitioned from one song to the next. The hybrids suggested that the whales weren’t just memorizing the songs as a single unit. They were taking the songs apart and putting them back together, creating variations using the song structure. The other paper, by Hal Whitehead of Dalhousie University in Halifax, Nova Scotia, looked at a different kind of cultural transmission in another species, the killer whale. 
The humpback songs spread horizontally, passing from one virile young thing to the next, like teenage fashions. But the real power of culture comes when caregivers can pass on discoveries to the next generation. That sort of vertical transmission is what gives human beings their edge. Killer whales stay with their mothers for as long as the mothers live, and mothers pass on eating traditions. In the same patch of ocean, you will find some whales that only eat salmon and other whales that only eat mammals, and these preferences are passed on from mother to child.

Even grandmothers may play a role. Besides humans, killer whales are the only mammal whose females live well past menopause. Those old females help to ensure the survival of their offspring, and they might help to pass on a preference for herring or shark to their grandchildren, too. (That may be more useful than my grandchildren's legacy—a taste for Montreal smoked meat and bad Borscht Belt jokes.)

Dr. Whitehead argues that these cultural traditions may even lead to physical changes. As different groups of whales become isolated from each other, the salmon eaters in one group and the mammal eaters in another, there appears to be a genetic shift affecting things such as their digestive abilities. The pattern should sound familiar: It's how the cultural innovation of dairy farming led to the selection of genes for lactose tolerance in humans. Even in whales, culture and nature are inextricably entwined.

WHAT THE BLIND SEE (AND DON'T) WHEN GIVEN SIGHT

In September 1678, a brilliant young Irish scientist named William Molyneux married the beautiful Lucy Domville. By November she had fallen ill and become blind, and the doctors could do nothing for her. Molyneux reacted by devoting himself to the study of vision. He also studied vision because he wanted to resolve some big philosophical issues: What kinds of knowledge are we born with? What is learned? And does that learning have to happen at certain stages in our lives? In 1688 he asked the philosopher John Locke: Suppose someone who was born blind suddenly regained their sight? What would they understand about the visual world?

In the 17th century, Molyneux's question was science fiction. Locke and his peers enthusiastically debated and speculated about the answer, but there was no way to actually restore a blind baby's sight. That's no longer true today. Some kinds of congenital blindness, such as congenital cataracts, can be cured. More than 300 years after Molyneux, another brilliant young scientist, Pawan Sinha of the Massachusetts Institute of Technology, has begun to find answers to his predecessor's questions.

Dr. Sinha has produced a substantial body of research, culminating in a paper last month in the Proceedings of the National Academy of Sciences. Like Molyneux, he was moved by both philosophical questions and human tragedy. When he was growing up, Dr. Sinha saw blind children begging on the streets of New Delhi. So in 2005 he helped to start Project Prakash, from the Sanskrit word for light. Prakash gives medical attention to blind children and teenagers in rural India. To date, the project has helped to treat more than 1,400 children, restoring sight to many. Project Prakash has also given scientists a chance to answer Molyneux's questions: to discover what we know about the visual world when we're born, what we learn and when we have to learn it. Dr.
Sinha and his colleagues discovered that some abilities that might seem to be learned show up as soon as children can see. For example, consider the classic Ponzo visual illusion. When you see two equal horizontal lines drawn on top of a perspective drawing of receding railway ties, the top line will look much longer than the bottom one. You might have thought that illusion depends on learning about distance and perspective, but the newly sighted children immediately see the lines the same way. On the other hand, some basic visual abilities depend more on experience at a critical time. When congenital cataracts are treated very early, children tend to develop fairly good visual acuity—the ability to see fine detail. Children who are treated much later don’t tend to develop the same level of acuity, even after they have had a lot of visual experience. In the most recent study, Dr. Sinha and colleagues looked at our ability to tell the difference between faces and other objects. People are very sensitive to faces; special brain areas are dedicated to face perception, and babies can discriminate pictures of faces from other pictures when they are only a few weeks old. The researchers studied five Indian children who were part of the Prakash project, aged 9 to 17, born blind but given sight. At first they couldn’t distinguish faces from similar pictures. But over the next few months they learned the skill and eventually they did as well as sighted children. So face detection had a different profile from both visual illusions and visual acuity—it wasn’t there right away, but it could be learned relatively quickly. The moral of the story is that the right answer about nature versus nurture is…it’s complicated. And that sometimes, at least, searching for the truth can go hand-in-hand with making the world a better place. THE SCIENCE OF 'I WAS JUST FOLLOWING ORDERS' There is no more chilling wartime phrase than “I was just following orders.” Surely, most of us think, someone who obeys a command to commit a crime is still acting purposely, and following orders isn’t a sufficient excuse. New studies help to explain how seemingly good people come to do terrible things in these circumstances: When obeying someone else, they do indeed often feel that they aren’t acting intentionally. Patrick Haggard, a neuroscientist at University College London, has been engaged for years in studying our feelings of agency and intention. But how can you measure them objectively? Asking people to report such an elusive sensation is problematic. Dr. Haggard found another way. In 2002 he discovered that intentional action has a distinctive but subtle signature: It warps your sense of time. People can usually perceive the interval between two events quite precisely, down to milliseconds. But when you act intentionally to make something happen—say, you press a button to make a sound play—your sense of time is distorted. You think that the sound follows your action more quickly than it actually does—a phenomenon called “intentional binding.” Your sense of agency somehow pulls the action and the effect together. This doesn’t happen if someone else presses your finger to the button or if electrical stimulation makes your finger press down involuntarily. And this distinctive time signature comes with a distinctive neural signature too. More recent studies show that following instructions can at times look more like passive, involuntary movement than like willed intentional action. 
In the journal Psychological Science last month, Peter Lush of the University of Sussex, together with colleagues including Dr. Haggard, examined hypnosis. Hypnosis is puzzling because people produce complicated and surely intentional actions—for example, imitating a chicken—but insist that they were involuntary. The researchers hypnotized people and then suggested that they press a button making a sound. The hypnotized people didn’t show the characteristic time-distortion signature of agency. They reported the time interval between the action and the sound accurately, as if someone else had pressed their finger down. Hypnosis really did make the actions look less intentional. In another study, Dr. Haggard and colleagues took off from the famous Milgram experiments of the 1960s. Social psychologist Stanley Milgram discovered that ordinary people were willing to administer painful shocks to someone else simply because the experimenter told them to. In Dr. Haggard’s version, reported in the journal Current Biology last year, volunteers did the experiment in pairs. If they pressed a button, a sound would play, the other person would get a brief but painful shock and they themselves would get about $20; each “victim” later got a chance to shock the aggressor. Sometimes the participants were free to choose whether or not to press the button, and they shocked the other person about half the time. At other times the experimenter told the participants what to do. In the free-choice trials, the participants showed the usual “intentional binding” time distortion: They experienced the task as free agents. Their brain activity, recorded by an electroencephalogram, looked intentional too. But when the experimenter told participants to shock the other person, they did not show the signature of intention, either in their time perception or in their brain responses. They looked like people who had been hypnotized or whose finger was moved for them, not like people who had set out to move their finger themselves. Following orders was apparently enough to remove the feeling of free will. These studies leave some big questions. When people follow orders, do they really lose their agency or does it just feel that way? Is there a difference? Most of all, what can we do to ensure that this very human phenomenon doesn’t lead to more horrific inhumanity in the future? WHEN CHILDREN BEAT ADULTS AT SEEING THE WORLD A few years ago, in my book “The Philosophical Baby,” I speculated that children might actually be more conscious, or at least more aware of their surroundings, than adults. Lots of research shows that we adults have a narrow “spotlight” of attention. We vividly experience the things that we focus on but are remarkably oblivious to everything else. There’s even a term for it: “inattentional blindness.” I thought that children’s consciousness might be more like a “lantern,” illuminating everything around it. When the book came out, I got many fascinating letters about how children see more than adults. A store detective described how he would perch on an upper balcony surveying the shop floor. The grown-ups, including the shoplifters, were so focused on what they were doing that they never noticed him. But the little children, trailing behind their oblivious parents, would glance up and wave. Of course, anecdotes and impressions aren’t scientific proof. But a new paper in press in the journal Psychological Science suggests that the store detective and I just might have been right. 
One of the most dramatic examples of the adult spotlight is “change blindness.” You can show people a picture, interrupt it with a blank screen, and then show people the same picture with a change in the background. Even when you’re looking hard for the change, it’s remarkably difficult to see, although once someone points it out, it seems obvious. You can see the same thing outside the lab. Movie directors have to worry about “continuity” problems in their films because it’s so hard for them to notice when something in the background has changed between takes. To study this problem, Daniel Plebanek and Vladimir Sloutsky at Ohio State University tested how much children and adults notice about objects and how good they are at detecting changes. The experimenters showed a series of images of green and red shapes to 34 children, age 4 and 5, and 35 adults. The researchers asked the participants to pay attention to the red shapes and to ignore the green ones. In the second part of the experiment, they showed another set of images of red and green shapes to participants and asked: Had the shapes remained the same or were they different? Adults were better than children at noticing when the red shapes had changed. That’s not surprising: Adults are better at focusing their attention and learning as a result. But the children beat the adults when it came to the green shapes. They had learned more about the unattended objects than the adults and noticed when the green shapes changed. In other words, the adults only seemed to learn about the object in their attentional spotlight, but the children learned about the background, too. We often say that young children are bad at paying attention. But what we really mean is that they’re bad at not paying attention, that they don’t screen out the world as grown-ups do. Children learn as much as they can about the world around them, even if it means that they get distracted by the distant airplane in the sky or the speck of paper on the floor when you’re trying to get them out the door to preschool. Grown-ups instead focus and act effectively and swiftly, even if it means ignoring their surroundings. Children explore, adults exploit. There is a moral here for adults, too. We are often so focused on our immediate goals that we miss unexpected developments and opportunities. Sometimes by focusing less, we can actually see more. So if you want to expand your consciousness, you can try psychedelic drugs, mysticism or meditation. Or you can just go for a walk with a 4-year-old. WHEN AWE-STRUCK, WE FEEL BOTH SMALLER AND LARGER I took my grandchildren this week to see “The Nutcracker.” At the crucial moment in the ballet, when the Christmas tree magically expands, my 3-year-old granddaughter, her head tilted up, eyes wide, let out an impressive, irrepressible “Ohhhh!” The image of that enchanted tree captures everything marvelous about the holiday, for believers and secular people alike. The emotion that it evokes makes braving the city traffic and crowds worthwhile. What the children, and their grandmother, felt was awe—that special sense of the vastness of nature, the universe, the cosmos, and our own insignificance in comparison. Awe can be inspired by a magnificent tree or by Handel’s “Hallelujah Chorus” or by Christmas Eve mass in the Notre-Dame de Paris cathedral. But why does this emotion mean so much to us? Dacher Keltner, a psychologist who teaches (as I do) at the University of California, Berkeley, has been studying awe for 15 years. 
He and his research colleagues think that the emotion is as universal as happiness or anger and that it occurs everywhere with the same astonished gasp. In one study Prof. Keltner participated in, villagers in the Himalayan kingdom of Bhutan who listened to a brief recording of American voices immediately recognized the sound of awe. Prof. Keltner’s earlier research has also shown that awe is good for us and for society. When people experience awe—looking up at a majestic sequoia, for example—they become more altruistic and cooperative. They are less preoccupied by the trials of daily life. Why does awe have this effect? A new study, by Prof. Keltner, Yang Bai and their colleagues, conditionally accepted in the Journal of Personality and Social Psychology, shows how awe works its magic. Awe’s most visible psychological effect is to shrink our egos, our sense of our own importance. Ego may seem very abstract, but in the new study the researchers found a simple and reliable way to measure it. The team showed their subjects seven circles of increasing size and asked them to pick the one that corresponded to their sense of themselves. Those who reported feeling more important or more entitled selected a bigger circle; they had bigger egos. The researchers asked 83 participants from the U.S. and 88 from China to keep a diary of their emotions. It turned out that, on days when they reported feeling awe, they selected smaller circles to describe themselves. Then the team arranged for more than a thousand tourists from many countries to do the circle test either at the famously awe-inspiring Yosemite National Park or at Fisherman’s Wharf on San Francisco’s waterfront, a popular but hardly awesome spot. Only Yosemite made participants from all cultures feel smaller. Next, the researchers created awe in the lab, showing people awe-inspiring or funny video clips. Again, only the awe clips shrank the circles. The experimenters also asked people to draw circles representing themselves and the people close to them—with the distance between circles indicating how close they felt to others. Feelings of awe elicited more and closer circles; the awe-struck participants felt more social connection to others. The team also asked people to draw a ladder and represent where they belonged on it—a reliable measure of status. Awe had no effect on where people placed themselves on this ladder—unlike an emotion such as shame, which takes people down a notch in their own eyes. Awe makes us feel less egotistical, but at the same time it expands our sense of well-being rather than diminishing it. The classic awe-inspiring stimuli in these studies remind people of the vastness of nature: tall evergreens or majestic Yosemite waterfalls. But even very small stimuli can have the same effect. Another image of this season, a newborn child, transcends any particular faith, or lack of faith, and inspires awe in us all. BABIES SHOW A CLEAR BIAS--TO LEARN NEW THINGS Why do we like people like us? We take it for granted that grown-ups favor the “in-group” they belong to and that only the hard work of moral education can overcome that preference. There may well be good evolutionary reasons for this. But is it a scientific fact that we innately favor our own? A study in 2007, published in the Proceedings of the National Academy of Sciences by Katherine Kinzler and her colleagues, suggested that even babies might prefer their own group. 
The authors found that 10-month-olds preferred to look at people who spoke the same language they did. In more recent studies, researchers have found that babies also preferred to imitate someone who spoke the same language. So our preference for people in our own group might seem to be part of human nature. But a new study in the same journal by Katarina Begus of Birkbeck, University of London and her colleagues suggests a more complicated view of humanity. The researchers started out exploring the origins of curiosity. When grown-ups think that they are about to learn something new, their brains exhibit a pattern of activity called a theta wave. The researchers fitted out 45 11-month-old babies with little caps covered with electrodes to record brain activity. The researchers wanted to see if the babies would also produce theta waves when they thought that they might learn something new. The babies saw two very similar-looking people interact with a familiar toy like a rubber duck. One experimenter pointed at the toy and said, “That’s a duck.” The other just pointed at the object and instead of naming it made a noise: She said “oooh” in an uninformative way. Then the babies saw one of the experimenters pick up an unfamiliar gadget. You would expect that the person who told you the name of the duck could also tell you about this new thing. And, sure enough, when the babies saw the informative experimenter, their brains produced theta waves, as if they expected to learn something. On the other hand, you might expect that the experimenter who didn’t tell you anything about the duck would also be unlikely to help you learn more about the new object. Indeed, the babies didn’t produce theta waves when they saw this uninformative person. This experiment suggested that the babies in the earlier 2007 study might have been motivated by curiosity rather than by bias. Perhaps they preferred someone who spoke their own language because they thought that person could teach them the most. So to test this idea, the experimenters changed things a little. In the first study, one experimenter named the object, and the other didn’t. In the new study, one experimenter said “That’s a duck” in English—the babies’ native language—while the other said, “Mira el pato,” describing the duck in Spanish—an unfamiliar language. Sure enough, their brains produced theta waves only when they saw the English speaker pick up the new object. The babies responded as if the person who spoke the same language would also tell them more about the new thing. So 11-month-olds already are surprisingly sensitive to new information. Babies leap at the chance to learn something new—and can figure out who is likely to teach them. The babies did prefer the person in their own group, but that may have reflected curiosity, not bias. They thought that someone who spoke the same language could tell them the most about the world around them. There is no guarantee that our biological reflexes will coincide with the demands of morality. We may indeed have to use reason and knowledge to overcome inborn favoritism toward our own group. But the encouraging message of the new study is that the desire to know—that keystone of human civilization—may form a deeper part of our nature than mistrust and discrimination. SHOULD WE LET TODDLERS PLAY WITH SAWS AND KNIVES? Last week, I stumbled on a beautiful and moving picture of young children learning. It’s a fragment of a silent 1928 film from the Harold E. 
Jones Child Study Center in Berkeley, Calif., founded by a pioneer in early childhood education. The children would be in their 90s now. But in that long-distant idyll, in their flapper bobs and old-fashioned smocks, they play (cautiously) with a duck and a rabbit, splash through a paddling pool, dig in a sandbox, sing and squabble. Suddenly, I had a shock. A teacher sawed a board in half, and a boy, surely no older than 5, imitated him with his own saw, while a small girl hammered in nails. What were the teachers thinking? Why didn’t somebody stop them? My 21st-century reaction reflects a very recent change in the way that we think about children, risk and learning. In a recent paper titled “Playing with Knives” in the journal Child Development, the anthropologist David Lancy analyzed how young children learn across different cultures. He compiled a database of anthropologists’ observations of parents and children, covering over 100 preindustrial societies, from the Dusan in Borneo to the Pirahã in the Amazon and the Aka in Africa. Then Dr. Lancy looked for commonalities in what children and adults did and said. In recent years, the psychologist Joseph Henrich and colleagues have used the acronym WEIRD—that is, Western, educated, industrialized, rich and democratic—to describe the strange subset of humans who have been the subject of almost all psychological studies. Dr. Lancy’s paper makes the WEIRDness of our modern attitudes toward children, for good or ill, especially vivid. He found some striking similarities in the preindustrial societies that he analyzed. Adults take it for granted that young children are independently motivated to learn and that they do so by observing adults and playing with the tools that adults use—like knives and saws. There is very little explicit teaching. And children do, in fact, become competent surprisingly early. Among the Maniq hunter-gatherers in Thailand, 4-year-olds skin and gut small animals without mishap. In other cultures, 3- to 5-year-olds successfully use a hoe, fishing gear, blowpipe, bow and arrow, digging stick and mortar and pestle. The anthropologists were startled to see parents allow and even encourage their children to use sharp tools. When a Pirahã toddler played with a sharp 9-inch knife and dropped it on the ground, his mother, without interrupting her conversation, reached over and gave it back to him. Dr. Lancy concludes: “Self-initiated learners can be seen as a source for both the endurance of culture and of change in cultural patterns and practices.” He notes that, of course, early knife skills can come at the cost of severed fingers. To me, like most adults in my WEIRD culture, that is far too great a risk even to consider. But trying to eliminate all such risks from children’s lives also might be dangerous. There may be a psychological analog to the “hygiene hypothesis” proposed to explain the dramatic recent increase in allergies. Thanks to hygiene, antibiotics and too little outdoor play, children don’t get exposed to microbes as they once did. This may lead them to develop immune systems that overreact to substances that aren’t actually threatening—causing allergies. In the same way, by shielding children from every possible risk, we may lead them to react with exaggerated fear to situations that aren’t risky at all and isolate them from the adult skills that they will one day have to master. We don’t have the data to draw firm causal conclusions. 
But at least anecdotally, many young adults now seem to feel surprisingly and irrationally fragile, fearful and vulnerable: I once heard a high schooler refuse to take a city bus “because of liability issues.” Drawing the line between allowing foolhardiness and inculcating courage isn’t easy. But we might have something to learn from the teachers and toddlers of 1928. A SMALL FIX IN MIND-SET CAN KEEP STUDENTS IN SCHOOL Education is the engine of social mobility and equality. But that engine has been sputtering, especially for the children who need help the most. Minority and disadvantaged children are especially likely to be suspended from school and to drop out of college. Why? Is it something about the students or something about the schools? And what can we do about it? Two recent studies published in the Proceedings of the National Academy of Sciences offer some hope. Just a few brief, inexpensive, online interventions significantly reduced suspension and dropout rates, especially for disadvantaged groups. That might seem surprising, but it reflects the insights of an important new psychological theory. The psychologist Carol Dweck at Stanford has argued that both teachers and students have largely unconscious “mind-sets”—beliefs and expectations—about themselves and others and that these can lead to a cascade of self-fulfilling prophecies. A teacher may start out, for example, being just a little more likely to think that an African-American student will be a troublemaker. That makes her a bit more punitive in disciplining that student. The student, in turn, may start to think that he is being treated unfairly, so he reacts to discipline with more anger, thus confirming the teacher’s expectations. She reacts still more punitively, and so on. Without intending to, they can both end up stuck in a vicious cycle that greatly amplifies what were originally small biases. In the same way, a student who is the first in her family to go to college may be convinced that she won’t be able to fit in socially or academically. When she comes up against the inevitable freshman hurdles, she interprets them as evidence that she is doomed to fail. And she won’t ask for help because she feels that would just make her weakness more obvious. She too ends up stuck in a vicious cycle. Changing mind-sets is hard—simply telling people that they should think differently often backfires. The two new studies used clever techniques to get them to take on different mind-sets more indirectly. The studies are also notable because they used the gold-standard method of randomized, controlled trials, with over a thousand participants total. In the first study, by Jason Okonofua, David Paunesku and Greg Walton at Stanford, the experimenters asked a group of middle-school math teachers to fill out a set of online materials at the start of school. The materials described vivid examples of how you could discipline students in a respectful, rather than a punitive, way. But the most important part was a section that asked the teachers to provide examples of how they themselves used discipline respectfully. The researchers told the participants that those examples could be used to train others—treating the teachers as experts with something to contribute. Another group of math teachers got a control questionnaire about using technology in the classroom. At the end of the school year, the teachers who got the first package had only half as many suspensions as the control group—a rate of 4.6% compared with 9.8%. 
In the other study, by Dr. Dweck and her colleagues, the experimenters gave an online package to disadvantaged students from a charter school who were about to enter college. One group got materials saying that all new students had a hard time feeling that they belonged but that those difficulties could be overcome. The package also asked the students to write an essay describing how those challenges could be met—an essay that could help other students. A control group answered similar questions about navigating buildings on the campus. Only 32% of the control group were still enrolled in college by the end of the year, but 45% of the students who got the mind-set materials were enrolled. The researchers didn’t tell people to have a better attitude. They just encouraged students and teachers to articulate their own best impulses. That changed mind-sets—and changed lives. THE PSYCHOPATH, THE ALTRUIST AND THE REST OF US One day in 2006, Paul Wagner donated one of his kidneys to a stranger with kidney failure. Not long before, he had been reading the paper on his lunch break at a Philadelphia company and saw an article about kidney donation. He clicked on the website and almost immediately decided to donate. One day in 2008, Scott Johnson was sitting by a river in Michigan, feeling aggrieved at the world. He took out a gun and killed three teenagers who were out for a swim. He showed no remorse or guilt—instead, he talked about how other people were always treating him badly. In an interview, Mr. Johnson compared his killing spree to spilling a glass of milk. These events were described in two separate, vivid articles in a 2009 issue of the New Yorker. Larissa MacFarquhar, who wrote about Mr. Wagner, went on to include him in her wonderful recent book about extreme altruists, “Strangers Drowning.” For most of us, the two stories are so fascinating because they seem almost equally alien. It’s hard to imagine how someone could be so altruistic or so egotistic, so kind or so cruel. The neuroscientist Abigail Marsh at Georgetown University started out studying psychopaths—people like Scott Johnson. There is good scientific evidence that psychopaths are very different from other kinds of criminals. In fact, many psychopaths aren’t criminals at all. They can be intelligent and successful and are often exceptionally charming and charismatic. Psychopaths have no trouble understanding how other people’s minds work; in fact, they are often very good at manipulating people. But from a very young age, they don’t seem to respond to the fear or distress of others. Psychopaths also show distinctive patterns of brain activity. When most of us see another person express fear or distress, the amygdala—a part of our brain that is important for emotion—becomes particularly active. That activity is connected to our immediate, intuitive impulse to help. The brains of psychopaths don’t respond to someone else’s fear or distress in the same way, and their amygdalae are smaller overall. But we know much less about extreme altruists like Paul Wagner. So in a study with colleagues, published in 2014 in Proceedings of the National Academy of Sciences, Dr. Marsh looked at the brain activity of people who had donated a kidney to a stranger. Like Mr. Wagner, most of these people said that they had made the decision immediately, intuitively, almost as soon as they found out that it was possible. 
The extreme altruists showed exactly the opposite pattern from the psychopaths: The amygdalae of the altruists were larger than normal, and they activated more in response to a face showing fear. The altruists were also better than typical people at detecting when another person was afraid. These brain studies suggest that there is a continuum in how we react to other people, with the psychopaths on one end of the spectrum and the saints at the other. We all see the world from our own egotistic point of view, of course. The poet Philip Larkin once wrote: “Yours is the harder course, I can see. On the other hand, mine is happening to me.” But for most of us, that perspective is extended to include at least some other people, though not all. We see fear or distress on the faces of those we love, and we immediately, intuitively, act to help. No one is surprised when a mother donates her kidney to her child. The psychopath can’t seem to feel anyone’s needs except his own. The extreme altruist feels everybody’s needs. The rest of us live, often uneasily and guiltily, somewhere in the middle. HOW BABIES KNOW THAT ALLIES CAN MEAN POWER This year, in elections all across the country, individuals will compete for various positions of power. The one who gets more people to support him or her will prevail. Democratic majority rule, the idea that the person with more supporters should win, may be a sophisticated and relatively recent political invention. But a new study in the Proceedings of the National Academy of Sciences suggests that the idea that the majority will win is much deeper and more fundamental to our evolution. Andrew Scott Baron and colleagues at the University of British Columbia studied some surprisingly sophisticated political observers and prognosticators. It turns out that even 6-month-old babies predict that the guy with more allies will prevail in a struggle. They are pundits in diapers. How could we possibly know this? Babies will look longer at something that is unexpected or surprising. Developmental researchers have exploited this fact in very clever ways to figure out what babies think. In the Scott Baron study, the experimenters showed 6- to 9-month-old babies a group of three green simplified cartoon characters and two blue ones (the colors were different on different trials). Then they showed the babies a brief cartoon of one of the green guys and one of the blue guys trying to cross a platform that only had room for one character at a time, like Robin Hood and Little John facing off across a single log bridge. Which character would win and make it across the platform? The babies looked longer when the blue guy won. They seemed to expect that the green guy, the guy with more buddies, would win, and they were surprised when the guy from the smaller group won instead. In a 2011 study published in the journal Science, Susan Carey at Harvard and her colleagues found that 9-month-olds also think that might makes right: The babies expected that a physically bigger character would win out over a smaller one. But the new study showed that babies also think that allies are even more important than mere muscle. The green guy and the blue guy on the platform were the same size. And the green guy’s allies were actually a little smaller than the blue guy’s friends. But the babies still thought that the character who had two friends would win out over the character who had just one, even if those friends were a bit undersized. 
What’s more, the babies expected the big guys to win only once they were about 9 months old. But they already thought the guy with more friends would win when they were just 6 months old. This might seem almost incredible: Six-month-olds, after all, can’t sit up yet, let alone caucus or count votes. But the ability may make evolutionary sense. Chimpanzees, our closest primate relatives, have sophisticated political skills. A less powerful chimp who calls on several other chimps for help can overthrow even the most ferociously egocentric alpha male. Our human ancestors made alliances, too. It makes sense that even young babies are sensitive to the size of social groups and the role they play in power. We often assume that politics is a kind of abstract negotiation between autonomous individual interests—voters choose candidates because they think those candidates will enact the policies they want. But the new studies of the baby pundits suggest a different picture. Alliance and dominance may be more fundamental human concepts than self-interest and negotiation. Even grown-up voters may be thinking more about who belongs to what group, or who is top dog, than who has the best health-care plan or tax scheme. SCIENCE IS STEPPING UP THE PACE OF INNOVATION Every year on the website Edge, scientists and other thinkers reply to one question. This year it’s “What do you consider the most interesting recent news” in science? The answers are fascinating. We’re used to thinking of news as the events that happen in a city or country within a few weeks or months. But scientists expand our thinking to the unimaginably large and the infinitesimally small. Despite this extraordinary range, the answers of the Edge contributors have an underlying theme. The biggest news of all is that a handful of large-brained primates on an insignificant planet have created machines that let them understand the world, at every scale, and let them change it too, for good or ill. Here is just a bit of the scientific news. The Large Hadron Collider—the giant particle accelerator in Geneva—is finally fully functional. So far the new evidence from the LHC has mostly just confirmed the standard model of physics, which helps explain everything from the birth of time to the end of the world. But at the tiny scale of the basic particles it is supposed to investigate, the Large Hadron Collider has detected a small blip—something really new may just be out there. Our old familiar solar system, though, has turned out to be full of surprises. Unmanned spacecraft have discovered that the planets surrounding us are more puzzling, peculiar and dynamic than we would ever have thought. Mars once had water. Pluto, which was supposed to be an inert lump, like the moon, turns out to be a dynamic world full of glaciers of nitrogen. On our own planet, the big, disturbing news is that the effects of carbon emissions on the climate are ever more evident and immediate. The ice sheets are melting, sea levels are rising, and last year was almost certainly the warmest on record. Our human response is achingly slow in contrast. When it comes to all the living things that inhabit that planet, the big news is the new Crispr gene-editing technology. The technique means that we can begin to rewrite the basic genetic code of all living beings—from mosquitoes to men. The news about our particular human bodies and their ills is especially interesting. 
The idea that tiny invisible organisms make us sick was one of the great triumphs of the scientific expansion of scale. But new machines that detect the genetic signature of bacteria have shown that those invisible germs—the “microbiome”—aren’t really the enemy. In fact, they’re essential to keeping us well, and the great lifesaving advance of antibiotics comes with a cost. The much more mysterious action of our immune system is really the key to human health, and that system appears to play a key role in everything from allergies to obesity to cancer. If new technology is helping us to understand and mend the human body, it is also expanding the scope of the human mind. We’ve seen lots of media coverage about artificial intelligence over the past year, but the basic algorithms are not really new. The news is the sheer amount of data and computational power that is available. Still, even if those advances are just about increases in data and computing power, they could profoundly change how we interact with the world. In my own contribution to answering the Edge question, I talked about the fact that toddlers are starting to interact with computers and that the next generation will learn about computers in a radically new way. From the Large Hadron Collider to the Mars Rover, from Crispr to the toddler’s iPad, the news is that technologies let us master the universe and ourselves and reshape the planet. What we still don’t know is whether, ultimately, these developments are good news or bad. WHO WAS THAT GHOST? SCIENCE'S REASSURING REPLY It’s midnight on Halloween. You walk through a deserted graveyard as autumn leaves swirl around your feet. Suddenly, inexplicably and yet with absolute certainty, you feel an invisible presence by your side. Could it be a ghost? A demon? Or is it just an asynchrony in somato-sensory motor integration in the frontoparietal cortex? A 2014 paper in the journal Current Biology by Olaf Blanke at the University Hospital of Geneva and his colleagues supports the last explanation. For millennia people have reported vividly experiencing an invisible person nearby. The researchers call it a “feeling of presence.” It can happen to any of us: A Pew research poll found that 18% of Americans say they have experienced a ghost. But patients with particular kinds of brain damage are especially likely to have this experience. The researchers found that specific areas of these patients’ frontoparietal cortex were damaged—the same brain areas that let us sense our own bodies. Those results suggested that the mysterious feeling of another presence might be connected to the equally mysterious feeling of our own presence—that absolute certainty that there is an “I” living inside my body. The researchers decided to try to create experimentally the feeling of presence. Plenty of people without evident brain damage say they have felt a ghost was present. Could the researchers systematically make ordinary people experience a disembodied spirit? They tested 50 ordinary, healthy volunteers. In the experiment, you stand between two robots and touch the robot in front of you with a stick. That “master” robot sends signals that control the second “slave” robot behind you. The slave robot reproduces your movements and uses them to control another stick that strokes your back. So you are stroking something in front of you, but you feel those same movements on your own back. 
The result is a very strong sense that somehow you are touching your own back, even though you know that’s physically impossible. The researchers have manipulated your sense of where your self begins and ends. Then the researchers changed the set-up just slightly. Now the slave robot touches your back half a second after you touch the master robot, so there is a brief delay between what you do and what you feel. Now people in the experiment report a “feeling of presence”: They say that somehow there is an invisible ghostly person in the room, even though that is also physically impossible. If we put that result together with the brain-damage studies, it suggests an intriguing possibility. When we experience ghosts and spirits, angels and demons, we are really experiencing a version of ourselves. Our brains construct a picture of the “I” peering out of our bodies, and if something goes slightly wrong in that process—because of brain damage, a temporary glitch in normal brain processing or the wiles of an experimenter—we will experience a ghostly presence instead. So, in the great “Scooby-Doo” tradition, we’ve cleared up the mystery, right? The ghost turned out just to be you in disguise? Not quite. All good ghost stories have a twist, what Henry James called “The Turn of the Screw.” The ghost in the graveyard was just a creation of your brain. But the “you” who met the ghost was also just the creation of your brain. In fact, the same brain areas that made you feel someone else was there are the ones that made you feel that you were there too. If you’re a good, hard-headed scientist, it’s easy to accept that the ghost was just a Halloween illusion, fading into the mist and leaves. But what about you, that ineffable, invisible self who inhabits your body and peers out of your eyes? Are you just a frontoparietal ghost too? Now that’s a really scary thought. IS OUR IDENTITY IN INTELLECT, MEMORY OR MORAL CHARACTER? This summer my 93-year-old mother-in-law died, a few months after her 94–year-old husband. For the last five years she had suffered from Alzheimer’s disease. By the end she had forgotten almost everything, even her children’s names, and had lost much of what defined her—her lively intelligence, her passion for literature and history. Still, what remained was her goodness, a characteristic warmth and sweetness that seemed to shine even more brightly as she grew older. Alzheimer’s can make you feel that you’ve lost the person you loved, even though they’re still alive. But for her children, that continued sweetness meant that, even though her memory and intellect had gone, she was still Edith. A new paper in Psychological Science reports an interesting collaboration between the psychologist Nina Strohminger at Yale University and the philosopher Shaun Nichols at the University of Arizona. Their research suggests that Edith was an example of a more general and rather surprising principle: Our identity comes more from our moral character than from our memory or intellect. Neurodegenerative diseases like Alzheimer’s make especially vivid a profound question about human nature. In the tangle of neural connections that make up my brain, where am I? Where was Edith? When those connections begin to unravel, what happens to the person? Many philosophers have argued that our identity is rooted in our continuous memories or in our accumulated knowledge. Drs. 
Strohminger and Nichols argue instead that we identify people by their moral characteristics, their gentleness or kindness or courage—if those continue, so does the person. To test this idea the researchers compared different kinds of neurodegenerative diseases in a 248-participant study. They compared Alzheimer’s patients to patients who suffer from fronto-temporal dementia, or FTD. FTD is the second most common type of dementia after Alzheimer’s, though it affects far fewer people and usually targets a younger age group. Rather than attacking the memory areas of the brain, it damages the frontal control areas. These areas are involved in impulse control and empathy—abilities that play a particularly important role in our moral lives. As a result, patients may change morally even though they retain memory and intellect. They can become indifferent to other people or be unable to control the impulse to be rude. They may even begin to lie or steal. Finally, the researchers compared both groups to patients with amyotrophic lateral sclerosis, or ALS, who gradually lose motor control but not other capacities. (Physicist Stephen Hawking suffers from ALS.) The researchers asked spouses or children caring for people with these diseases to fill out a questionnaire about how the patients had changed, including changes in memory, cognition and moral behavior. They also asked questions like, “How much do you sense that the patient is still the same person underneath?” or, “Do you feel like you still know who the patient is?” The researchers found that the people who cared for the FTD patients were much more likely to feel that they had become different people than the caregivers of the Alzheimer’s patients. The ALS caregivers were least likely to feel that the patient had become a different person. What’s more, a sophisticated statistical analysis showed that this was the effect of changes in the patient’s moral behavior in particular. Across all three groups, changes in moral behavior predicted changes in perceived identity, while changes in memory or intellect did not. These results suggest something profound. Our moral character, after all, is what links us to other people. It’s the part of us that goes beyond our own tangle of neurons to touch the brains and lives of others. Because that moral character is central to who we are, there is a sense in which Edith literally, and not just metaphorically, lives on in the people who loved her.
AGGRESSION IN CHILDREN MAKES SENSE - SOMETIMES Walk into any preschool classroom and you’ll see that some 4-year-olds are always getting into fights—while others seldom do, no matter the provocation. Even siblings can differ dramatically—remember Cain and Abel. Is it nature or nurture that causes these deep differences in aggression? The new techniques of genomics—mapping an organism’s DNA and analyzing how it works—initially led people to think that we might find a gene for undesirable individual traits like aggression. But from an evolutionary point of view, the very idea that a gene can explain traits that vary so dramatically is paradoxical: If aggression is advantageous, why didn’t the gene for aggression spread more widely? If it’s harmful, why would the gene have survived at all? Two new studies suggest that the relationship between genes and aggression is more complicated than a mere question of nature vs. nurture. And those complications may help to resolve the evolutionary paradox. In earlier studies, researchers looked at variation in a gene involved in making brain chemicals. Children with a version of the gene called VAL were more likely to become aggressive than those with a variation called MET. But this only happened if the VAL children also experienced stressful events like abuse, violence or illness. So it seemed that the VAL version of the gene made the children more vulnerable to stress, while the MET version made them more resilient. A study published last month in the journal Developmental Psychology, by Beate Hygen and colleagues from the Norwegian University of Science and Technology and Jay Belsky of U.C. Davis, found that the story was even more complicated. They analyzed the genes of hundreds of Norwegian 4-year-olds. They also got teachers to rate how aggressive the children were and parents to record whether the children had experienced stressful life events. As in the earlier studies, the researchers found that children with the VAL variant were more aggressive when they were subjected to stress. But they also found something else: When not subjected to stress, these children were actually less aggressive than the MET children. Dr. Belsky has previously used the metaphor of orchids and dandelions to describe types of children. Children with the VAL gene seem to be more sensitive to the environment, for good and bad, like orchids that can be magnificent in some environments but wither in others. The MET children are more like dandelions, coming out somewhere in the middle no matter the conditions. Dr. Belsky has suggested that this explanation for individual variability can help to resolve the evolutionary puzzle. Including both orchids and dandelions in a group of children gives the human race a way to hedge its evolutionary bets. A study published online in May in the journal Developmental Science, by Dr. Belsky with Willem Frankenhuis and Karthik Panchanathan, used mathematical modeling to explore this idea more precisely. If a species lives in a predictable, stable environment, then it would be adaptive for its behavior to fit that environment as closely as possible. But suppose you live in an environment that changes unpredictably. In that case, you might want to diversify your genetic portfolio. Investing in dandelions is like putting your money in bonds: It’s safe and reliable and will give you a constant, if small, return in many conditions. Investing in orchids is higher risk, but it also promises higher returns. 
If conditions change, then the orchids will be able to change with them. Being mean might sometimes pay off, but only when times are tough. Cooperation will be more valuable when resources are plentiful. The risk is that the orchids may get it wrong—a few stressful early experiences might make a child act as if the world is hard, even when it isn’t. In fact, the model showed that when environments change substantially over time, a mix of orchids and dandelions is the most effective strategy. We human beings perpetually redesign our living space and social circumstances. By its very nature, our environment is unpredictable. That may be why every preschool class has its mix of the sensitive and the stolid. BRAINS, SCHOOLS AND A VICIOUS CYCLE OF POVERTY A fifth or more of American children grow up in poverty, with the situation worsening since 2000, according to census data. At the same time, as education researcher Sean Reardon has pointed out, an “income achievement gap” is widening: Low-income children do much worse in school than higher-income children. Since education plays an ever bigger role in how much we earn, a cycle of poverty is trapping more American children. It’s hard to think of a more important project than understanding how this cycle works and trying to end it. Neuroscience can contribute to this project. In a new study in Psychological Science, John Gabrieli at the Massachusetts Institute of Technology and his colleagues used imaging techniques to measure the brains of 58 14-year-old public school students. Twenty-three of the children qualified for free or reduced-price lunch; the other 35 were middle-class. The scientists found consistent brain differences between the two groups. The researchers measured the thickness of the cortex—the brain’s outer layer—in different brain areas. The low-income children had developed thinner cortices than the high-income children. The low-income group had more ethnic and racial minorities, but statistical analyses showed that ethnicity and race were not associated with brain thickness, although income was. Children with thinner cortices also tended to do worse on standardized tests than those with thicker ones. This was true for high-income as well as low-income children. Of course, just finding brain differences doesn’t tell us much. By definition, something about the brains of the children must be different, since their behavior on the tests varies so much. But finding this particular brain difference at least suggests some answers. The brain is the most complex system on the planet, and brain development involves an equally complex web of interactions between genes and the physical, social and intellectual environment. We still have much to learn. But we do know that the brain is, as neuroscientists say, plastic. The process of evolution has designed brains to be shaped by the outside world. That’s the whole point of having one. Two complementary processes play an especially important role in this shaping. In one process, what neuroscientists call “proliferation,” the brain makes many new connections between neurons. In the other process, “pruning,” some existing connections get stronger, while others disappear. Experience heavily influences both proliferation and pruning. Early in development, proliferation prevails. Young children make many more new connections than adults do. Later in development, pruning grows in importance. 
Humans shift from a young brain that is flexible and good at learning, to an older brain that is more effective and efficient, but more rigid. A change in the thickness of the cortex seems to reflect this developmental shift. While in childhood the cortex gradually thickens, in adolescence this process is reversed and the cortex gets thinner, probably because of pruning. We don’t know whether the low-income 14-year-olds in this study failed to grow thicker brains as children, or whether they shifted to thinner brains more quickly in adolescence. There are also many differences in the experiences of low-income and high-income children, aside from income itself—differences in nutrition, stress, learning opportunities, family structure and many more. We don’t know which of these differences led to the differences in cortical thickness. But we can find some hints from animal studies. Rats raised in enriched environments, with lots of things to explore and opportunities to learn, develop more neural connections. Rats subjected to stress develop fewer connections. Some evidence exists that stress also makes animals grow up too quickly, even physically, with generally bad effects. And nutrition influences brain development in all animals. The important point, and the good news, is that brain plasticity never ends. Brains can be changed throughout life, and we never entirely lose the ability to learn and change. But, equally importantly, childhood is the time of the greatest opportunity, and the greatest risk. We lose the potential of millions of young American brains every day. HOW 1-YEAR-OLDS FIGURE OUT THE WORLD Watch a 1-year-old baby carefully for a while, and count how many experiments you see. When Georgiana, my 17-month-old granddaughter, came to visit last weekend, she spent a good 15 minutes exploring the Easter decorations—highly puzzling, even paradoxical, speckled Styrofoam eggs. Are they like chocolate eggs or hard-boiled eggs? Do they bounce? Will they roll? Can you eat them? Some of my colleagues and I have argued for 20 years that even the youngest children learn about the world in much the way that scientists do. They make up theories, analyze statistics, try to explain unexpected events and even do experiments. When I write for scholarly journals about this “theory theory,” I talk about it very abstractly, in terms of ideas from philosophy, computer science and evolutionary biology. But the truth is that, at least for me, personally, watching Georgie is as convincing as any experiment or argument. I turn to her granddad and exclaim “Did you see that? It’s amazing! She’s destined to be an engineer!” with as much pride and astonishment as any nonscientist grandma. (And I find myself adding, “Can you imagine how cool it would be if your job was to figure out what was going on in that little head?” Of course, that is supposed to be my job—but like everyone else in the information economy, it often feels like all I ever actually do is answer e-mail.) Still, the plural of anecdote is not data, and fond grandma observations aren’t science. And while guessing what babies think is easy and fun, proving it is really hard and takes ingenious experimental techniques. 
In an amazingly clever new paper in the journal Science, Aimee Stahl and Lisa Feigenson at Johns Hopkins University show systematically that 11-month-old babies, like scientists, pay special attention when their predictions are violated, learn especially well as a result, and even do experiments to figure out just what happened. They took off from some classic research showing that babies will look at something longer when it is unexpected. The babies in the new study either saw impossible events, like the apparent passage of a ball through a solid brick wall, or straightforward events, like the same ball simply moving through an empty space. Then they heard the ball make a squeaky noise. The babies were more likely to learn that the ball made the noise when the ball had passed through the wall than when it had behaved predictably. In a second experiment, some babies again saw the mysterious dissolving ball or the straightforward solid one. Other babies saw the ball either rolling along a ledge or rolling off the end of the ledge and apparently remaining suspended in thin air. Then the experimenters simply gave the babies the balls to play with. The babies explored objects more when they behaved unexpectedly. They also explored them differently depending on just how they behaved unexpectedly. If the ball had vanished through the wall, the babies banged the ball against a surface; if it had hovered in thin air, they dropped it. It was as if they were testing to see if the ball really was solid, or really did defy gravity, much like Georgie testing the fake eggs in the Easter basket. In fact, these experiments suggest that babies may be even better scientists than grown-ups often are. Adults suffer from “confirmation bias”—we pay attention to the events that fit what we already know and ignore things that might shake up our preconceptions. Charles Darwin famously kept a special list of all the facts that were at odds with his theory, because he knew he’d otherwise be tempted to ignore or forget them. Babies, on the other hand, seem to have a positive hunger for the unexpected. Like the ideal scientists proposed by the philosopher of science Karl Popper, babies are always on the lookout for a fact that falsifies their theories. If you want to learn the mysteries of the universe, that great, distinctively human project, keep your eye on those weird eggs. HOW WE LEARN TO BE AFRAID OF THE RIGHT THINGS We learn to be afraid. One of the oldest discoveries in psychology is that rats will quickly learn to avoid a sound or a smell that has been associated with a shock in the past—they not only fear the shock, they become scared of the smell, too. A paper by Nim Tottenham of the University of California, Los Angeles in “Current Topics in Behavioral Neurosciences” summarizes recent research on how this learned fear system develops, in animals and in people. Early experiences help shape the fear system. If caregivers protect us from danger early in life, this helps us to develop a more flexible and functional fear system later. Dr. Tottenham argues, in particular, that caring parents keep young animals from prematurely developing the adult system: They let rat pups be pups and children be children. Of course, it makes sense to quickly learn to avoid events that have led to danger in the past. But it can also be paralyzing. There is a basic paradox about learning fear. Because we avoid the things we fear, we can’t learn anything more about them. 
We can’t learn that the smell no longer leads to a shock unless we take the risk of exploring the dangerous world. Many mental illnesses, from generalized anxiety to phobias to post-traumatic stress disorder, seem to have their roots in the way we learn to be afraid. We can learn to be afraid so easily and so rigidly that even things that we know aren’t dangerous—the benign spider, the car backfire that sounds like a gunshot—can leave us terrified. Anxious people end up avoiding all the things that just might be scary, and that leads to an increasingly narrow and restricted life and just makes the fear worse. The best treatment is to let people “unlearn” their fears—gradually exposing them to the scary cause and showing them that it doesn’t actually lead to the dangerous effect. Neuroscientists have explored the biological basis for this learned fear. It involves coordination between two brain areas. One is the amygdala, an area buried deep in the brain that helps produce the basic emotion of fear, the trembling and heart-pounding. The other is the prefrontal cortex, which is involved in learning, control and planning. Regina Sullivan and her colleagues at New York University have looked at how rats develop these fear systems. Young rats don’t learn to be fearful the way that older rats do, and their amygdala and prefrontal systems take a while to develop and coordinate. The baby rats “unlearn” fear more easily than the adults, and they may even approach and explore the smell that led to the shock, rather than avoid it. If the baby rats are periodically separated from their mothers, however, they develop the adult mode of fear and the brain systems that go with it more quickly. This early maturity comes at a cost. Baby rats who are separated from their mothers have more difficulties later on, difficulties that parallel human mental illness. Dr. Tottenham and her colleagues found a similar pattern in human children. They looked at children who had grown up in orphanages in their first few years of life but then were adopted by caring parents. When they looked at the children’s brains with functional magnetic resonance imaging, they found that, like the rats, these children seemed to develop adultlike “fear circuits” more quickly. Their parents were also more likely to report that the children were anxious. The longer the children had stayed in the orphanages, the more their fear system developed abnormally, and the more anxious they were. The research fits with a broader evolutionary picture. Why does childhood exist at all? Why do people, and rats, put so much effort into protecting helpless babies? The people who care for children give them a protected space to figure out just how to cope with the dangerous adult world. Care gives us courage; love lets us learn. THE SMARTEST QUESTIONS TO ASK ABOUT INTELLIGENCE Scientists have largely given up the idea of “innate talent,” as I said in my last column. This change might seem implausible and startling. We all know that some people are better than others at doing some things. And we all know that genes play a big role in shaping our brains. So why shouldn’t genes determine those differences? Biologists talk about the relationship between a “genotype,” the information in your DNA, and a “phenotype,” the characteristics of an adult organism. These relationships turn out to be so complicated that parceling them out into percentages of nature and nurture is impossible. 
And, most significantly, these complicated relationships can change as environments change. For example, Michael Meaney at McGill University has discovered “epigenetic” effects that allow nurture to reshape nature. Caregiving can turn genes on and off and rewire brain areas. In a 2000 study published in Nature Neuroscience, he and colleagues found that some rats were consistently better at solving mazes than others. Was this because of innate maze-solving genes? These smart rats, it turned out, also had more attentive mothers. The researchers then “cross-fostered” the rat pups: They took the babies of inattentive mothers, who would usually not be so good at maze-solving, and gave them to the attentive mothers to raise, and vice versa. If the baby rats’ talent was innate, this should make no difference. If it wasn’t, it should make all the difference. In fact, the inattentive moms’ babies who were raised by the attentive moms got smart, but the opposite pattern didn’t hold. The attentive moms’ babies stayed relatively smart even when they were raised by the inattentive moms. So genetics prevailed in the poor environment, but environment prevailed in the rich one. So was maze-solving innate or not? It turns out that it’s not the right question. To study human genetics, researchers can compare identical and fraternal twins. Early twin studies found that IQ was “heritable”—identical twins were more similar than fraternal ones. But these studies looked at well-off children. Eric Turkheimer at the University of Virginia looked at twins in poor families and found that IQ was much less “heritable.” In the poor environment, small differences in opportunity swamped any genetic differences. When everyone had the same opportunities, the genetic differences had more effect. So is IQ innate or not? Again, the wrong question. If you only studied rats, this might be just academic. After all, rats usually are raised by their biological mothers. But the most important innate feature of human beings is our ability to transform our physical and social environments. Alone among animals, we can envision an unprecedented environment that might help us thrive, and make that environment a reality. That means we simply don’t know what the relationship between genes and environment will look like in the future. Take IQ again. James Flynn, at New Zealand’s University of Otago, and others have shown that absolute IQ scores have been steadily and dramatically increasing, by as much as three points a decade. (The test designers have to keep making the questions harder to keep the average at 100.) The best explanation is that we have consciously transformed our society into a world where schools are ubiquitous. So even though genes contribute to whatever IQ scores measure, IQ can change radically as a result of changes in environment. Abstract thinking and a thirst for knowledge might once have been a genetic quirk. In a world of schools, they become the human inheritance. Thinking in terms of “innate talent” often leads to a kind of fatalism: Because right now fewer girls than boys do well at math, the assumption is that this will always be the case. But the actual science of genes and environment says just the opposite. If we want more talented children, we can change the world to create them. WHAT A CHILD CAN TEACH A SMART COMPUTER Every January the intellectual impresario and literary agent John Brockman (who represents me, I should disclose) asks a large group of thinkers a single question on his website, edge.org. 
This year it is: “What do you think about machines that think?” There are lots of interesting answers, ranging from the skeptical to the apocalyptic. I’m not sure that asking whether machines can think is the right question, though. As someone once said, it’s like asking whether submarines can swim. But we can ask whether machines can learn, and especially, whether they can learn as well as 3-year-olds. Everyone knows that Alan Turing helped to invent the very idea of computation. Almost no one remembers that he also thought that the key to intelligence would be to design a machine that was like a child, not an adult. He pointed out, presciently, that the real secret to human intelligence is our ability to learn. The history of artificial intelligence is fascinating because it has been so hard to predict what would be easy or hard for a computer. At first, we thought that things like playing chess or proving theorems—the bullfights of nerd machismo—would be hardest. But they turn out to be much easier than recognizing a picture of a cat or picking up a cup. And it’s actually easier to simulate a grandmaster’s gambit than to mimic the ordinary learning of every baby. Recently, machine learning has helped computers to do things that were impossible before, like labeling Internet images accurately. Techniques like “deep learning” work by detecting complicated and subtle statistical patterns in a set of data. But this success isn’t due to the fact that computers have suddenly developed new powers. The big advance is that, thanks to the Internet, they can apply these statistical techniques to enormous amounts of data—data that were predigested by human brains. Computers can recognize Internet images only because millions of real people have sorted out the unbelievably complex information received by their retinas and labeled the images they post online—like, say, Instagrams of their cute kitty. The dystopian nightmare of “The Matrix” is now a simple fact: We’re all serving Google’s computers, under the anesthetizing illusion that we’re just having fun with LOLcats. The trouble with this sort of purely statistical machine learning is that you can only generalize from it in a limited way, whether you’re a baby or a computer or a scientist. A more powerful way to learn is to formulate hypotheses about what the world is like and to test them against the data. One of the other big advances in machine learning has been to automate this kind of hypothesis-testing. Machines have become able to formulate hypotheses and test them against data extremely well, with consequences for everything from medical diagnoses to meteorology. The really hard problem is deciding which hypotheses, out of all the infinite possibilities, are worth testing. Preschoolers are remarkably good at creating brand new, out-of-the-box creative concepts and hypotheses in a way that computers can’t even begin to match. Preschoolers are also remarkably good at creating chaos and mess, as all parents know, and that may actually play a role in their creativity. Turing presciently argued that it might be good if his child computer acted randomly, at least some of the time. The thought processes of 3-year-olds often seem random, even crazy. But children have an uncanny ability to zero in on the right sort of weird hypothesis—in fact, they can be substantially better at this than grown-ups. We have almost no idea how this sort of constrained creativity is possible. 
There are, indeed, amazing thinking machines out there, and they will unquestionably far surpass our puny minds and eventually take over the world. We call them our children. 2014 HOW CHILDREN GET THE CHRISTMAS SPIRIT As we wade through the towers of presents and the mountains of torn wrapping paper, and watch the children’s shining, joyful faces and occasional meltdowns, we may find ourselves speculating—in a detached, philosophical way—about generosity and greed. That’s how I cope, anyway. Are we born generous and then learn to be greedy? Or is it the other way round? Do immediate intuitive impulses or considered reflective thought lead to generosity? And how could we possibly tell? Recent psychological research has weighed in on the intuitive-impulses side. People seem to respond quickly and perhaps even innately to the good and bad behavior of others. Researchers like Kiley Hamlin at the University of British Columbia have shown that even babies prefer helpful people to harmful ones. And psychologists like Jonathan Haidt at New York University’s Stern School of Business have argued that even adult moral judgments are based on our immediate emotional reactions—reflection just provides the after-the-fact rationalizations. But some new studies suggest it’s more complicated. Jason Cowell and Jean Decety at the University of Chicago explored this question in the journal Current Biology. They used electroencephalography, or EEG, to monitor electrical activity in children’s brains. Their study had two parts. In the first part, the researchers recorded the brain waves of 3-to-5-year-olds as they watched cartoons of one character either helping or hurting another. The children’s brains reacted differently to the good and bad scenarios. But they did so in two different ways. One brain response, the EPN, was quick, another, the LPP, was in more frontal parts of the brain and was slower. In adults, the EPN is related to automatic, instinctive reactions while the LPP is connected to more purposeful, controlled and reflective thought. In the second part of the study, the experimenters gave the children a pile of 10 stickers and told them they could keep them all themselves or could give some of them to an anonymous child who would visit the lab later in the day. Some children were more generous than others. Then the researchers checked to see which patterns of brain activity predicted the children’s generosity. They found that the EPN—the quick, automatic, intuitive reaction—didn’t predict how generous the children were later on. But the slow, thoughtful LPP brain wave did. Children who showed more of the thoughtful brain activity when they saw the morally relevant cartoons also were more likely to share later on. Of course, brain patterns are complicated and hard to interpret. But this study at least suggests an interesting possibility. There are indeed quick and automatic responses to help and to harm, and those responses may play a role in our moral emotions. But more reflective, complex and thoughtful responses may play an even more important role in our actions, especially actions like deciding to share with a stranger. Perhaps this perspective can help to resolve some of the Christmas-time contradictions, too. We might wish that the Christmas spirit would descend on us and our children as simply and swiftly as the falling snow. 
But perhaps it’s the very complexity of the season, that very human tangle of wanting and giving, joy and elegy, warmth and tension, that makes Christmas so powerful, and that leads even children to reflection, however gently. Scrooge tells us about both greed and generosity, Santa’s lists reflect both justice and mercy, the Magi and the manger represent both abundance and poverty. And, somehow, at least in memory, Christmas generosity always outweighs the greed, the joys outlive the disappointments. Even an unbeliever like me who still deeply loves Christmas can join in the spirit of Scrooge’s nephew Fred, “Though it has never put a scrap of gold or silver in my pocket [or, I would add, an entirely uncomplicated intuition of happiness in my brain], I believe that Christmas has done me good, and will do me good, and, I say, God bless it!” HOW HUMANS LEARN TO COMMUNICATE WITH THEIR EYES The eyes are windows to the soul. What could be more obvious? I look through my eyes onto the world, and I look through the eyes of others into their minds. We immediately see the tenderness and passion in a loving gaze, the fear and malice in a hostile glance. In a lecture room, with hundreds of students, I can pick out exactly who is, and isn’t, paying attention. And, of course, there is the electricity of meeting a stranger’s glance across a crowded room. But wait a minute, eyes aren’t windows at all. They’re inch-long white and black and colored balls of jelly set in holes at the top of a skull. How could those glistening little marbles possibly tell me about love or fear or attention? A new study in the Proceedings of the National Academy of Sciences, by Sarah Jessen of the Max Planck Institute and Tobias Grossmann of the University of Virginia, suggests that our understanding of eyes runs very deep and emerges very early. Human eyes have much larger white areas than the eyes of other animals and so are easier to track. When most people, including tiny babies, look at a face, they concentrate on the eyes. People with autism, who have trouble understanding other minds, often don’t pay attention to eyes in the same way, and they have trouble meeting or following another person’s gaze. All this suggests that we may be especially adapted to figure out what our fellow humans see and feel from their eyes. If that’s true, even very young babies might detect emotions from eyes, and especially eye whites. The researchers showed 7-month-old babies schematic pictures of eyes. The eyes could be fearful or neutral; the clue to the emotion was the relative position of the eye-whites. (Look in the mirror and raise your eyelids until the white area on top of the iris is visible—then register the look of startled fear on your doppelgänger in the reflection.) The fearful eyes could look directly at the baby or look off to one side. As a comparison, the researchers also gave the babies exactly the same images to look at but with the colors reversed, so that the whites were black. They showed the babies the images for only 50 milliseconds, too briefly even to see them consciously. They used a technique called event-related brain potentials, or ERP, to analyze the babies’ brain waves. The babies’ brain waves were different when they looked at the fearful eyes and the neutral ones, and when they saw the eyes look right at them or off to one side. The differences were particularly clear in the frontal parts of the brain. Those brain areas control attention and are connected to the brain areas that detect fear. 
When the researchers showed the babies the reversed images, their brains didn’t differentiate between them. So they weren’t just responding to the visual complexity of the images—they seemed to recognize that there was something special about the eye-whites. So perhaps the eyes are windows to the soul. After all, I think that I just look out and directly see the table in front of me. But, in fact, my brain is making incredibly complex calculations that accurately reconstruct the shape of the table from the patterns of light that enter my eyeballs. My baby granddaughter Georgiana’s brain, nestled in the downy head on my lap, does the same thing. The new research suggests that my brain also makes my eyes move in subtle ways that send out complex signals about what I feel and see. And, as she gazes up at my face, Georgie’s brain interprets those signals and reconstructs the feelings that caused them. She really does see the soul behind my eyes, as clearly as she sees the table in front of them. WHAT SENDS TEENS TOWARD TRIUMPH OR TRIBULATION Laurence Steinberg calls his authoritative new book on the teenage mind “Age of Opportunity.” Most parents think of adolescence, instead, as an age of crisis. In fact, the same distinctive teenage traits can lead to either triumph or disaster. On the crisis side, Dr. Steinberg outlines the grim statistics. Even though teenagers are close to the peak of strength and health, they are more likely to die in accidents, suicides and homicides than younger or older people. And teenagers are dangerous to others as well. Study after study shows that criminal and antisocial behavior rises precipitously in adolescence and then falls again. Why? What happens to transform a sane, sober, balanced 8-year-old into a whirlwind of destruction in just a few years? And why do even smart, thoughtful, good children get into trouble? It isn’t because teenagers are dumb or ignorant. Studies show that they understand risks and predict the future as well as adults do. Dr. Steinberg wryly describes a public service campaign that tried to deter unprotected sex by explaining that children born to teenage parents are less likely to go to college. The risk to a potential child’s educational future is not very likely to slow down two teenagers making out on the couch. Nor is it just that teenagers are impulsive; the ability for self-control steadily develops in the teen years, and adolescents are better at self-control than younger children. So why are they so much more likely to act destructively? Dr. Steinberg and other researchers suggest that the crucial change involves sensation-seeking. Teenagers are much more likely than either children or adults to seek out new experiences, rewards and excitements, especially social experiences. Some recent studies by Kathryn Harden at the University of Texas at Austin and her colleagues in the journal Developmental Science support this idea. They analyzed a very large study that asked thousands of adolescents the same questions over the years, as they grew up. Some questions measured impulsiveness (“I have to use a lot of self-control to stay out of trouble”), some sensation-seeking (“I enjoy new and exciting experiences even if they are a little frightening or unusual . . .”) and some delinquency (“I took something from a store without paying for it”). Impulsivity and sensation-seeking were not closely related to one another. 
Self-control steadily increased from childhood to adulthood, while sensation-seeking went up sharply and then began to decline. It was the speed and scope of the increase in sensation-seeking that predicted whether the teenagers would break the rules later on. But while teenage sensation-seeking can lead to trouble, it can also lead to some of the most important advances in human culture. Dr. Steinberg argues that adolescence is a time when the human brain becomes especially “plastic,” particularly good at learning, especially about the social world. Adolescence is a crucial period for human innovation and exploration. Sensation-seeking helped teenagers explore and conquer the literal jungles in our evolutionary past—and it could help them explore and conquer the metaphorical Internet jungles in our technological future. It can lead young people to explore not only new hairstyles and vocabulary, but also new kinds of politics, art, music and philosophy. So how can worried parents ensure that their children’s explorations come out well rather than badly? A very recent study by Dr. Harden’s group provides a bit of solace. The relationship between sensation-seeking and delinquency was moderated by two other factors: the teenager’s friends and the parents’ knowledge of the teen’s activities. When parents kept track of where their children were and whom they were with, sensation-seeking was much less likely to be destructive. Asking the old question, “Do you know where your children are?” may be the most important way to make sure that adolescent opportunities outweigh the crises. POVERTY'S VICIOUS CYCLE CAN AFFECT OUR GENES From the inside, nothing in the world feels more powerful than our impulse to care for helpless children. But new research shows that caring for children may actually be even more powerful than it feels. It may not just influence children's lives—it may even shape their genes. As you might expect, the genomic revolution has completely transformed the nature/nurture debate. What you might not expect is that it has shown that nurture is even more important than we thought. Our experiences, especially our early experiences, don't just interact with our genes; they actually make our genes work differently. This might seem like heresy. After all, one of the first things we learn in Biology 101 is that the genes we carry are determined the instant we are conceived. And that's true. But genes are important because they make cells, and the process that goes from gene to cell is remarkably complex. The genes in a cell can be expressed differently—they can be turned on or off, for example—and that makes the cells behave in completely different ways. That's how the same DNA can create neurons in your brain and bone cells in your femur. The exciting new field of epigenetics studies this process. One of the most important recent discoveries in biology is that this process of translating genes into cells can be profoundly influenced by the environment. In a groundbreaking 2004 paper in Nature Neuroscience, Michael Meaney at McGill University and his colleagues looked at a gene in rats that helps regulate how an animal reacts to stress. A gene can be "methylated" or "demethylated"—a certain molecule does or doesn't attach to the gene. This changes the way that the gene influences the cell. In carefully controlled experiments, Dr. Meaney discovered that early caregiving influenced how much the stress-regulating gene was methylated. 
Rats who got less nuzzling and licking from their mothers had more methylated genes. In turn, the rats with the methylated gene were more likely to react badly to stress later on. And these rats, in turn, were less likely to care for their own young, passing on the effect to the next generation. The scientists could carefully control every aspect of the rats' genes and environment. But could you show the same effect in human children, with their far more complicated brains and lives? A new study by Seth Pollak and colleagues at the University of Wisconsin at Madison in the journal Child Development does just that. They looked at adolescents from vulnerable backgrounds, and compared the genes of children who had been abused and neglected to those who had not. Sure enough, they found the same pattern of methylation in the human gene that is analogous to the rat stress-regulating gene. Maltreated children had more methylation than children who had been cared for. Earlier studies show that abused and neglected children are more sensitive to stress as adults, and so are more likely to develop problems like anxiety and depression, but we might not have suspected that the trouble went all the way down to their genes. The researchers also found a familiar relationship between the socio-economic status of the families and the likelihood of abuse and neglect: Poverty, stress and isolation lead to maltreatment. The new studies suggest a vicious multigenerational circle that affects a horrifyingly large number of children, making them more vulnerable to stress when they grow up and become parents themselves. Twenty percent of American children grow up in poverty, and this number has been rising, not falling. Nearly a million are maltreated. The new studies show that this damages children, and perhaps even their children's children, at the most fundamental biological level. EVEN CHILDREN GET MORE OUTRAGED AT 'THEM' THAN AT 'US' From Ferguson to Gaza, this has been a summer of outrage. But just how outraged people are often seems to depend on which group they belong to. Polls show that many more African-Americans think that Michael Brown's shooting by a Ferguson police officer was unjust than white Americans. How indignant you are about Hamas rockets or Israeli attacks that kill civilians often depends on whether you identify with the Israelis or the Palestinians. This is true even when people agree about the actual facts. You might think that such views are a matter of history and context, and that is surely partly true. But a new study in the Proceedings of the National Academy of Sciences suggests that they may reflect a deeper fact about human nature. Even young children are more indignant about injustice when it comes from "them" and is directed at "us." And that is true even when "them" and "us" are defined by nothing more than the color of your hat. Jillian Jordan, Kathleen McAuliffe and Felix Warneken at Harvard University looked at what economists and evolutionary biologists dryly call "costly third-person norm-violation punishment" and the rest of us call "righteous outrage." We take it for granted that someone who sees another person act unfairly will try to punish the bad guy, even at some cost to themselves. From a purely economic point of view, this is puzzling—after all, the outraged person is doing fine themselves. But enforcing fairness helps ensure social cooperation, and we humans are the most cooperative of primates. So does outrage develop naturally, or does it have to be taught? 
The experimenters gave some 6-year-old children a pile of Skittles candy. Then they told them that earlier on, another pair of children had played a Skittle-sharing game. For example, Johnny got six Skittles, and he could choose how many to give to Henry and how many to keep. Johnny had either divided the candies fairly or kept them all for himself. Now the children could choose between two options. If they pushed a lever to the green side, Johnny and Henry would keep their Skittles, and so would the child. If they pushed it to the red side, all six Skittles would be thrown away, and the children would lose a Skittle themselves as well. Johnny would be punished, but they would lose too. When Johnny was fair, the children pushed the lever to green. But when Johnny was selfish, the children acted as if they were outraged. They were much more likely to push the lever to red—even though that meant they would lose themselves. How would being part of a group influence these judgments? The experimenters let the children choose a team. The blue team wore blue hats, and the yellow team wore yellow. They also told the children whether Johnny and Henry each belonged to their team or the other one. The teams were totally arbitrary: There was no poisonous past, no history of conflict. Nevertheless, the children proved more likely to punish Johnny's unfairness if he came from the other team. They were also more likely to punish him if Henry, the victim, came from their own team. As soon as they showed that they were outraged at all, the children were more outraged by "them" than "us." This is a grim result, but it fits with other research. Children have impulses toward compassion and justice—the twin pillars of morality—much earlier than we would have thought. But from very early on, they tend to reserve compassion and justice for their own group. There was a ray of hope, though. Eight-year-olds turned out to be biased toward their own team but less biased than the younger children. They had already seemed to widen their circle of moral concern beyond people who wear the same hats. We can only hope that, eventually, the grown-up circle will expand to include us all. In a shifty world, surely the one thing we can rely on is the evidence of our own eyes. I may doubt everything else, but I have no doubts about what I see right now. Even if I'm stuck in The Matrix, even if the things I see aren't real—I still know that I see them. Or do I? A new paper in the journal Trends in Cognitive Sciences by the New York University philosopher Ned Block demonstrates just how hard it is to tell if we really know what we see. Right now it looks to me as if I see the entire garden in front of me, each of the potted succulents, all of the mossy bricks, every one of the fuchsia blossoms. But I can only pay attention to and remember a few things at a time. If I just saw the garden for an instant, I'd only remember the few plants I was paying attention to just then. How about all the things I'm not paying attention to? Do I actually see them, too? It may just feel as if I see the whole garden because I quickly shift my attention from the blossoms to the bricks and back. Every time I attend to a particular plant, I see it clearly. That might make me think that I was seeing it clearly all along, like somebody who thinks the refrigerator light is always on, because it always turns on when you open the door to look. This "refrigerator light" illusion might make me think I see more than I actually do. 
On the other hand, maybe I do see everything in the garden—it's just that I can't remember and report everything I see, only the things I pay attention to. But how can I tell if I saw something if I can't remember it? Prof. Block focuses on a classic experiment originally done in 1960 by George Sperling, a cognitive psychologist at the University of California, Irvine. (You can try the experiment yourself online.) Say you see a three-by-three grid of nine letters flash up for a split second. What letters were they? You will only be able to report a few of them. Now suppose the experimenter tells you that if you hear a high-pitched noise you should focus on the first row, and if you hear a low-pitched noise you should focus on the last row. This time, not surprisingly, you will accurately report all three letters in the cued row, though you can't report the letters in the other rows. But here's the trick. Now you only hear the noise after the grid has disappeared. You will still be very good at remembering the letters in the cued row. But think about it—you didn't know beforehand which row you should focus on. So you must have actually seen all the letters in all the rows, even though you could only access and report a few of them at a time. It seems as if we do see more than we can say. Or do we? Here's another possibility. We know that people can extract some information from images they can't actually see—in subliminal perception, for example. Perhaps you processed the letters unconsciously, but you didn't actually see them until you heard the cue. Or perhaps you just saw blurred fragments of the letters. Prof. Block describes many complex and subtle further experiments designed to distinguish these options, and he concludes that we do see more than we remember. But however the debate gets resolved, the real moral is the same. We don't actually know what we see at all! You can do the Sperling experiment hundreds of times and still not be sure whether you saw the letters. Philosophers sometimes argue that our conscious experience can't be doubted because it feels so immediate and certain. But scientists tell us that feeling is an illusion, too.
A TODDLER'S SOUFFLES AREN'T JUST CHILD'S PLAY Augie, my 2-year-old grandson, is working on his soufflés. This began by accident. Grandmom was trying to simultaneously look after a toddler and make dessert. But his delight in soufflé-making was so palpable that it has become a regular event. The bar, and the soufflé, rise higher on each visit—each time he does a bit more and I do a bit less. He graduated from pushing the Cuisinart button and weighing the chocolate, to actually cracking and separating the eggs. Last week, he gravely demonstrated how you fold in egg whites to his clueless grandfather. (There is some cultural inspiration from Augie's favorite Pixar hero, Remy the rodent chef in "Ratatouille," though this leads to rather disturbing discussions about rats in the kitchen.) It's startling to see just how enthusiastically and easily a 2-year-old can learn such a complex skill. And it's striking how different this kind of learning is from the kind children usually do in school. New studies in the journal Human Development by Barbara Rogoff at the University of California, Santa Cruz and colleagues suggest that this kind of learning may actually be more fundamental than academic learning, and it may also influence how helpful children are later on. Dr. Rogoff looked at children in indigenous Mayan communities in Latin America. She found that even toddlers do something she calls "learning by observing and pitching in." Like Augie with the soufflés, these children master useful, difficult skills, from making tortillas to using a machete, by watching the grown-ups around them intently and imitating the simpler parts of the process. Grown-ups gradually encourage them to do more—the pitching-in part. The product of this collaborative learning is a genuine contribution to the family and community: a delicious meal instead of a standardized test score. This kind of learning has some long-term consequences, Dr. Rogoff suggests. She and her colleagues also looked at children growing up in Mexico City who either came from an indigenous heritage, where this kind of observational learning is ubiquitous, or a more Europeanized tradition. When they were 8 the children from the indigenous traditions were much more helpful than the Europeanized children: They did more work around the house, more spontaneously, including caring for younger siblings. And children from an indigenous heritage had a fundamentally different attitude toward helping. They didn't need to be asked to help—instead they were proud of their ability to contribute. The Europeanized children and parents were more likely to negotiate over helping. Parents tried all kinds of different contracts and bargains, and different regimes of rewards and punishments. Mostly, as readers will recognize with a sigh, these had little effect. For these children, household chores were something that a grown-up made you do, not something you spontaneously contributed to the family. Dr. Rogoff argues that there is a connection between such early learning by pitching in and the motivation and ability of school-age children to help. In the indigenous-tradition families, the toddler's enthusiastic imitation eventually morphed into real help. In the more Europeanized families, the toddler's abilities were discounted rather than encouraged. The same kind of discounting happens in my middle-class American world. After all, when I make the soufflé without Augie's help there's a much speedier result and a lot less chocolate fresco on the walls. 
And it's true enough that in our culture, in the long run, learning to make a good soufflé or to help around the house, or to take care of a baby, may be less important to your success as an adult than more academic abilities. But by observing and pitching in, Augie may be learning something even more fundamental than how to turn eggs and chocolate into soufflé. He may be learning how to turn into a responsible grown-up himself.
RICE, WHEAT AND THE VALUES THEY SOW Could what we eat shape how we think? A new paper in the journal Science by Thomas Talhelm at the University of Virginia and colleagues suggests that agriculture may shape psychology. A bread culture may think differently than a rice-bowl society. Psychologists have long known that different cultures tend to think differently. In China and Japan, people think more communally, in terms of relationships. By contrast, people are more individualistic in what psychologist Joseph Henrich, in commenting on the new paper, calls "WEIRD cultures." WEIRD stands for Western, educated, industrialized, rich and democratic. Dr. Henrich's point is that cultures like these are actually a tiny minority of all human societies, both geographically and historically. But almost all psychologists study only these WEIRD folks. The differences show up in surprisingly varied ways. Suppose I were to ask you to draw a graph of your social network, with you and your friends represented as circles attached by lines. Americans make their own circle a quarter-inch larger than their friends' circles. In Japan, people make their own circle a bit smaller than the others. Or you can ask people how much they would reward the honesty of a friend or a stranger and how much they would punish their dishonesty. Most Easterners tend to say they would reward a friend more than a stranger and punish a friend less; Westerners treat friends and strangers more equally. These differences show up even in tests that have nothing to do with social relationships. You can give people a "Which of these things belongs together?" problem, like the old "Sesame Street" song. Say you see a picture of a dog, a rabbit and a carrot. Westerners tend to say the dog and the rabbit go together because they're both animals—they're in the same category. Easterners are more likely to say that the rabbit and the carrot go together—because rabbits eat carrots. None of these questions has a right answer, of course. So why have people in different parts of the world developed such different thinking styles? You might think that modern, industrial cultures would naturally develop more individualism than agricultural ones. But another possibility is that the kind of agriculture matters. Rice farming, in particular, demands a great deal of coordinated labor. To manage a rice paddy, a whole village has to cooperate and coordinate irrigation systems. By contrast, a single family can grow wheat. Dr. Talhelm and colleagues used an ingenious design to test these possibilities. They looked at rice-growing and wheat-growing regions within China. (The people in these areas had the same language, history and traditions; they just grew different crops.) Then they gave people the psychological tests I just described. The people in wheat-growing areas looked more like WEIRD Westerners, but the rice growers showed the more classically Eastern communal and relational patterns. Most of the people they tested didn't actually grow rice or wheat themselves, but the cultural traditions of rice or wheat seemed to influence their thinking. This agricultural difference predicted the psychological differences better than modernization did. Even industrialized parts of China with a rice-growing history showed the more communal thinking pattern. The researchers also looked at two measures of what people do outside the lab: divorces and patents for new inventions. 
Conflict-averse communal cultures tend to have fewer divorces than individualistic ones, but they also create fewer individual innovations. Once again, wheat-growing areas looked more "WEIRD" than rice-growing ones. In fact, Dr. Henrich suggests that these agricultural differences may have produced the psychological differences, which in turn may have sparked modernization itself. Aliens from outer space looking at the Earth in the year 1000 would never have bet that barbarian Northern Europe would become industrialized before civilized Asia. And they would surely never have guessed that eating sandwiches instead of stir-fry might make the difference.
GRANDMOTHERS: THE BEHIND-THE-SCENES KEY TO HUMAN CULTURE? Why do I exist? This isn't a philosophical cri de coeur; it's an evolutionary conundrum. At 58, I'm well past menopause, and yet I'll soldier on, with luck, for many years more. The conundrum is more vivid when you realize that human beings (and killer whales) are the only species where females outlive their fertility. Our closest primate relatives—chimpanzees, for example—usually die before their 50s, when they are still fertile. It turns out that my existence may actually be the key to human nature. This isn't a megalomaniacal boast but a new biological theory: the "grandmother hypothesis." Twenty years ago, the anthropologist Kristen Hawkes at the University of Utah went to study the Hadza, a forager group in Africa, thinking that she would uncover the origins of hunting. But then she noticed the many wiry old women who dug roots and cooked dinners and took care of babies (much like me, though my root-digging skills are restricted to dividing the irises). It turned out that these old women played an important role in providing nutrition for the group, as much as the strapping young hunters. What's more, those old women provided an absolutely crucial resource by taking care of their grandchildren. This isn't just a miracle of modern medicine. Our human life expectancy is much longer than it used to be—but that's because far fewer children die in infancy. Anthropologists have looked at life spans in hunter-gatherer and forager societies, which are like the societies we evolved in. If you make it past childhood, you have a good chance of making it into your 60s or 70s. There are many controversies about what happened in human evolution. But there's no debate that there were two dramatic changes in what biologists call our "life-history": Besides living much longer than our primate relatives, our babies depend on adults for much longer. Young chimps gather as much food as they eat by the time they are 7 or so. But even in forager societies, human children pull their weight only when they are teenagers. Why would our babies be helpless for so long? That long immaturity helps make us so smart: It gives us a long protected time to grow large brains and to use those brains to learn about the world we live in. Human beings can learn to adapt to an exceptionally wide variety of environments, and those skills of learning and culture develop in the early years of life. But that immaturity has a cost. It means that biological mothers can't keep babies going all by themselves: They need help. In forager societies grandmothers provide a substantial amount of child care as well as nutrition. Barry Hewlett at Washington State University and his colleagues found, much to their surprise, that grandmothers even shared breast-feeding with mothers. Some grandmoms just served as big pacifiers, but some, even after menopause, could "relactate," actually producing milk. (Though I think I'll stick to the high-tech, 21st-century version of helping to feed my 5-month-old granddaughter with electric pumps, freezers and bottles.) Dr. Hawkes's "grandmother hypothesis" proposes that grandmotherhood developed in tandem with our long childhood. In fact, she argues that the evolution of grandmothers was exactly what allowed our long childhood, and the learning and culture that go with it, to emerge. 
In mathematical models, you can see what happens if, at first, just a few women live past menopause and use that time to support their grandchildren (who, of course, share their genes). The "grandmother trait" can rapidly take hold and spread. And the more grandmothers contribute, the longer the period of immaturity can be. So on Mother's Day this Sunday, as we toast mothers over innumerable Bloody Marys and Eggs Benedicts across the country, we might add an additional toast for the gray-haired grandmoms behind the scenes.
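To see how such a model can work, here is a deliberately stripped-down toy simulation, not Dr. Hawkes's published analysis: the population size, survival rates and inheritance rule are all invented. The only point is that a small survival advantage for the grandchildren of helpful grandmothers is enough to make the trait spread.

    import random

    POP = 1000
    BASE_SURVIVAL = 0.50      # invented: chance a child survives without grandmother help
    HELPED_SURVIVAL = 0.55    # invented: chance with a post-menopausal grandmother helping

    def next_generation(freq):
        """Trait frequency among survivors, given its frequency among newborns."""
        survivors = []
        for _ in range(POP):
            carries_trait = random.random() < freq     # crude stand-in for inheritance
            survival = HELPED_SURVIVAL if carries_trait else BASE_SURVIVAL
            if random.random() < survival:
                survivors.append(carries_trait)
        return sum(survivors) / len(survivors) if survivors else 0.0

    freq = 0.05               # start with just a few carriers
    for _ in range(100):
        freq = next_generation(freq)
    print(round(freq, 2))     # by now the trait has spread through most of the population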
SCIENTISTS STUDY WHY STORIES EXIST We human beings spend hours each day telling and hearing stories. We always have. We’ve passed heroic legends around hunting fires, kitchen tables and the web, and told sad tales of lost love on sailing ships, barstools and cell phones. We’ve been captivated by Oedipus and Citizen Kane and Tony Soprano. Why? Why not just communicate information through equations or lists of facts? Why is it that even when we tell the story of our own random, accidental lives we impose heroes and villains, crises and resolutions? You might think that academic English and literature departments, departments that are devoted to stories, would have tried to answer this question or would at least want to hear from scientists who had. But, for a long time, literary theory was dominated by zombie ideas that had died in the sciences. Marx and Freud haunted English departments long after they had disappeared from economics and psychology. Recently, though, that has started to change. Literary scholars are starting to pay attention to cognitive science and neuroscience. Admittedly, some of the first attempts were misguided and reductive – “evolutionary psychology” just-so stories or efforts to locate literature in a particular brain area. But the conversation between literature and science is becoming more and more sophisticated and interesting. At a fascinating workshop at Stanford last month called “The Science of Stories” scientists and scholars talked about why reading Harlequin romances may make you more empathetic, about how ten-year-olds create the fantastic fictional worlds called “paracosms”, and about the subtle psychological inferences in the great Chinese novel, the Story of the Stone. One of the most interesting and surprising results came from the neuroscientist Uri Hasson at Princeton. As techniques for analyzing brain-imaging data have gotten more sophisticated, neuroscientists have gone beyond simply mapping particular brain regions to particular psychological functions. Instead, they use complex mathematical analyses to look for patterns in the activity of the whole brain as it changes over time. Hasson and his colleagues have gone beyond even that. They measure the relationship between the pattern in one person’s brain and the pattern in another’s. They’ve been especially interested in how brains respond to stories, whether they’re watching a Clint Eastwood movie, listening to a Salinger short story, or just hearing someone’s personal “How We Met” drama. When different people watched the same vivid story as they lay in the scanner -- “The Good, the Bad and the Ugly”, for instance, -- their brain activity unfolded in a remarkably similar way. Sergio Leone really knew how to get into your head. In another experiment they recorded the pattern of one person’s brain activity as she told a vivid personal story. Then someone else listened to the story on tape and they recorded his brain activity. Again, there was a remarkable degree of correlation between the two brain patterns. The storyteller, like Leone, had literally gotten in to the listener’s brain and altered it in predictable ways. But more than that, she had made the listener’s brain match her own brain. The more tightly coupled the brains became, the more the listener said that he understood the story. This coupling effect disappeared if you scrambled the sentences in the story. There was something about the literary coherence of the tale that seemed to do the work. 
One of my own favorite fictions, Star Trek, often includes stories about high-tech telepathic mind-control. Some alien has special powers that allow them to shape another person’s brain activity to match their own, or that produce brains so tightly linked that you can barely distinguish them. Hasson’s results suggest that we lowly humans are actually as good at mind-melding as the Vulcans or the Borg. We just do it with stories.
WHY YOU'RE NOT AS CLEVER AS A 4-YEAR-OLD Are young children stunningly dumb or amazingly smart? We usually think that children are much worse at solving problems than we are. After all, they can’t make lunch or tie their shoes, let alone figure out long division or ace the SATs. But, on the other hand, every parent finds herself exclaiming “Where did THAT come from!” all day long. So we also have a sneaking suspicion that children might be a lot smarter than they seem. A new study from our lab that just appeared in the journal Cognition shows that four-year-olds may actually solve some problems better than grown-ups do. Chris Lucas, Tom Griffiths, Sophie Bridgers and I wanted to know how preschoolers learn about cause and effect. We used a machine that lights up when you put some combinations of blocks on it and not others. Your job is to figure out which blocks make it go. (Actually, we secretly activate the machine with a hidden pedal, but fortunately nobody ever guesses that.) Try it yourself. Imagine that you, a clever grown-up, see me put a round block on the machine three times. Nothing happens. But when I put a square block on next to the round one the machine lights up. So the square one makes it go and the round one doesn’t, right? Well, not necessarily. That’s true if individual blocks light up the machine. That’s the obvious idea and the one that grown-ups always think of first. But the machine could also work in a more unusual way. It could be that it takes a combination of two blocks to make the machine go, the way that my annoying microwave will only go if you press both the “cook” button and the “start” button. Maybe the square and round blocks both contribute, but they have to go on together. Suppose I also show you that a triangular block does nothing and a rectangular one does nothing, but the machine lights up when you put them on together. That should tell you that the machine follows the unusual combination rule instead of the obvious individual block rule. Will that change how you think about the square and round blocks? We showed patterns like these to kids ages 4 and 5 as well as to Berkeley undergraduates. First we showed them the triangle/rectangle kind of pattern, which suggested that the machine might use the unusual combination rule. Then we showed them the ambiguous round/square kind of pattern. The kids got it. They figured out that the machine might work in this unusual way, and so that you should put both blocks on together. But the best and brightest students acted as if the machine would always follow the common and obvious rule, even when we showed them that it might work differently. Does this go beyond blocks and machines? We think it might reflect a much more general difference between children and adults. Children might be especially good at thinking about unlikely possibilities. After all, grown-ups know a tremendous amount about how the world works. It makes sense that we mostly rely on what we already know. In fact, computer scientists talk about two different kinds of learning and problem solving – “exploit” versus “explore.” In “exploit” learning we try to quickly find the solution that is most likely to work right now. In “explore” learning we try out lots of possibilities, including unlikely ones, even if they may not have much immediate pay-off. To thrive in a complicated world you need both kinds of learning. A particularly effective strategy is to start off exploring and then narrow in to exploit.
Childhood, especially our unusually long and helpless human childhood, may be evolution’s way of balancing exploration and exploitation. Grown-ups stick with the tried and true; 4-year-olds have the luxury of looking for the weird and wonderful.
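The explore-exploit trade-off mentioned above is a standard idea in machine learning, and a tiny simulation makes the strategy concrete. This is a generic sketch, not a model from the Cognition paper: a learner tries two options with unknown payoffs, explores heavily at first, and gradually shifts to exploiting whichever option has looked best so far. All of the numbers are invented for illustration.

    import random

    true_payoffs = [0.3, 0.7]     # unknown to the learner; invented for the example
    estimates = [0.0, 0.0]        # the learner's running estimate of each payoff
    counts = [0, 0]

    for trial in range(1000):
        # Exploration rate starts high and decays: explore first, exploit later.
        epsilon = max(0.05, 1.0 - trial / 500)
        if random.random() < epsilon:
            choice = random.randrange(2)               # explore: try anything
        else:
            choice = estimates.index(max(estimates))   # exploit: best so far
        reward = 1 if random.random() < true_payoffs[choice] else 0
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    print(estimates)   # roughly [0.3, 0.7] after enough trials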
THE PSYCHEDELIC ROAD TO OTHER CONSCIOUS STATES How do a few pounds of gray goo in our skulls create our conscious experience—the blue of the sky, the tweet of the birds? Few questions are so profound and important—or so hard. We are still very far from an answer. But we are learning more about what scientists call "the neural correlates of consciousness," the brain states that accompany particular kinds of conscious experience. Most of these studies look at the sort of conscious experiences that people have in standard FMRI brain-scan experiments or that academics like me have all day long: bored woolgathering and daydreaming punctuated by desperate bursts of focused thinking and problem-solving. We've learned quite a lot about the neural correlates of these kinds of consciousness. But some surprising new studies have looked for the correlates of more exotic kinds of consciousness. Psychedelic drugs such as LSD were designed to be used in scientific research and, potentially at least, as therapy for mental illness. But of course, those drugs long ago escaped from the lab into the streets. They disappeared from science as a result. Recently, though, scientific research on hallucinogens has been making a comeback. Robin Carhart-Harris at Imperial College London and his colleagues review their work on psychedelic neuroscience in a new paper in the journal Frontiers in Neuroscience. Like other neuroscientists, they put people in FMRI brain scanners. But these scientists gave psilocybin—the active ingredient in consciousness-altering "magic mushrooms"—to volunteers with experience with psychedelic drugs. Others got a placebo. The scientists measured both groups' brain activity. Normally, when we introspect, daydream or reflect, a group of brain areas called the "default mode network" is particularly active. These areas also seem to be connected to our sense of self. Another brain-area group is active when we consciously pay attention or work through a problem. In both rumination and attention, parts of the frontal cortex are particularly involved, and there is a lot of communication and coordination between those areas and other parts of the brain. Some philosophers and neuroscientists have argued that consciousness itself is the result of this kind of coordinated brain activity. They think consciousness is deeply connected to our sense of the self and our capacities for reflection and control, though we might have other fleeting or faint kinds of awareness. But what about psychedelic consciousness? Far from faint or fleeting, psychedelic experiences are more intense, vivid and expansive than everyday ones. So you might expect to see that the usual neural correlates of consciousness would be especially active when you take psilocybin. That's just what the scientists predicted. But consistently, over many experiments, they found the opposite. On psilocybin, the default mode network and frontal control systems were actually much less active than normal, and there was much less coordination between different brain areas. In fact, "shroom" consciousness looked neurologically like the inverse of introspective, reflective, attentive consciousness. The researchers also got people to report on the quality of their psychedelic experiences. The more intense the experiences were and particularly, the more that people reported that they had lost the sense of a boundary between themselves and the world, the more they showed the distinctive pattern of deactivation. Dr. 
Carhart-Harris and colleagues suggest that the common theory linking consciousness and control is wrong. Instead, much of the brain activity accompanying workaday consciousness may be devoted to channeling, focusing and even shutting down experience and information, rather than creating them. The Carhart-Harris team points to other uncontrolled but vivid kinds of consciousness such as dreams, mystical experiences, early stages of psychosis and perhaps even infant consciousness as parallels to hallucinogenic drug experience. To paraphrase Hamlet, it turns out that there are more, and stranger, kinds of consciousness than are dreamt of in our philosophy.
THE SURPRISING PROBABILITY GURUS WEARING DIAPERS Two new studies in the journal Cognition describe how some brilliant decision makers expertly use probability for profit. But you won't meet these economic whizzes at the World Economic Forum in Switzerland this month. Unlike the "Davos men," these analysts require a constant supply of breasts, bottles, shiny toys and unconditional adoration (well, maybe not so unlike the Davos men). Although some of them make do with bananas. The quants in question are 10-month-old babies and assorted nonhuman primates. Ordinary grown-ups are terrible at explicit probabilistic and statistical reasoning. For example, how likely is it that there will be a massive flood in America this year? How about an earthquake leading to a massive flood in California? People illogically give the first event a lower likelihood than the second. But even babies and apes turn out to have remarkable implicit statistical abilities. Stephanie Denison at the University of Waterloo in Canada and Fei Xu at the University of California, Berkeley, showed babies two large transparent jars full of lollipop-shaped toys. Some of the toys had plain black tops while some were pink with stars, glitter and blinking lights. Of course, economic acumen doesn't necessarily imply good taste, and most of the babies preferred pink bling to basic black. The two jars had different proportions of black and pink toys. For example, one jar contained 12 pink and four black toys. The other jar had 12 pink toys too but also contained 36 black toys. The experimenter took out a toy from one jar, apparently at random, holding it by the "pop" so that the babies couldn't see what color it was. Then she put it in an opaque cup on the floor. She took a toy from the second jar in the same way and put it in another opaque cup. The babies crawled toward one cup or the other and got the toy. (Half the time she put the first cup in front of the first jar, half the time she switched them around.) What should you do in this situation if you really want pink lollipops? The first cup is more likely to have a pink pop inside than the second, the odds are 3-1 versus 1-3, even though both jars have exactly the same number of pink toys inside. It isn't a sure thing, but that is where you would place your bets. So did the babies. They consistently crawled to the cup that was more likely to have a pink payoff. In a second experiment, one jar had 16 pink and 4 black toys, while the other had 24 pink and 96 black ones. The second jar actually held more pink toys than the first one, but the cup was less likely to hold a pink toy. The babies still went for the rational choice. In the second study, Hannes Rackoczy at the University of Göttingen in Germany and his colleagues did a similar experiment with a group of gorillas, bonobos, chimps and orangutans. They used banana and carrot pieces, and the experimenter hid the food in one or the other hand, not a cup. But the scientists got the same results: The apes chose the hand that was more likely to hold a banana. So it seems that we're designed with a basic understanding of probability. The puzzle is this: Why are grown-ups often so stupid about probabilities when even babies and chimps can be so smart? This intuitive, unconscious statistical ability may be completely separate from our conscious reasoning. But other studies suggest that babies' unconscious understanding of numbers may actually underpin their ability to explicitly learn math later. 
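For readers who want to check the arithmetic, here is a minimal sketch of the choice the babies faced. The jar contents are the ones described above; the helper function and its name are mine, purely for illustration.

    # Chance that a single random draw from a jar is pink, assuming each toy
    # is equally likely to be drawn. Jar contents are those described above.
    def pink_probability(pink, black):
        return pink / (pink + black)

    # First experiment: both jars hold 12 pink toys, but different numbers of black ones.
    print(pink_probability(12, 4))    # 0.75, odds of 3 to 1 in favor of pink
    print(pink_probability(12, 36))   # 0.25, odds of 1 to 3

    # Second experiment: the second jar holds more pink toys in absolute terms,
    # yet a random draw from it is less likely to come up pink.
    print(pink_probability(16, 4))    # 0.8
    print(pink_probability(24, 96))   # 0.2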
We don't usually even try to teach probability until high school. Maybe we could exploit these intuitive abilities to teach children, and adults, to understand probability better and to make better decisions as a result.
2013
TRIAL AND ERROR IN TODDLERS AND SCIENTISTS The Gopnik lab is rejoicing. My student Caren Walker and I have just published a paper in the well known journal Psychological Science. Usually when I write about scientific papers here, they sound neat and tidy. But since this was our own experiment, I can tell you the messy inside story too. First, the study—and a small IQ test for you. Suppose you see an experimenter put two orange blocks on a machine, and it lights up. She then puts a green one and a blue one on the same machine, but nothing happens. Two red ones work, a black and white combination doesn't. Now you have to make the machine light up yourself. You can choose two purple blocks or a yellow one and a brown one. But this simple problem actually requires some very abstract thinking. It's not that any particular block makes the machine go. It's the fact that the blocks are the same rather than different. Other animals have a very hard time understanding this. Chimpanzees can get hundreds of examples and still not get it, even with delicious bananas as a reward. As a clever (or even not so clever) reader of this newspaper, you'd surely choose the two purple blocks. The conventional wisdom has been that young children also can't learn this kind of abstract logical principle. Scientists like Jean Piaget believed that young children's thinking was concrete and superficial. And in earlier studies, preschoolers couldn't solve this sort of "same/different" problem. But in those studies, researchers asked children to say what they thought about pictures of objects. Children often look much smarter when you watch what they do instead of relying on what they say. We did the experiment I just described with 18-to-24-month-olds. And they got it right, with just two examples. The secret was showing them real blocks on a real machine and asking them to use the blocks to make the machine go. Tiny toddlers, barely walking and talking, could quickly learn abstract relationships. And they understood "different" as well as "same." If you reversed the examples so that the two different blocks made the machine go, they would choose the new, "different" pair. The brilliant scientists of the Gopnik lab must have realized that babies could do better than prior research suggested and so designed this elegant experiment, right? Not exactly. Here's what really happened: We were doing a totally different experiment. My student Caren wanted to see whether getting children to explain an event made them think about it more abstractly. We thought that a version of the "same block" problem would be tough for 4-year-olds and having them explain might help. We actually tried a problem a bit simpler than the one I just described, because the experimenter put the blocks on the machine one at a time instead of simultaneously. The trouble was that the 4-year-olds had no trouble at all! Caren tested 3-year-olds, then 2-year-olds and finally the babies, and they got it too. We sent the paper to the journal. All scientists occasionally (OK, more than occasionally) curse journal editors and reviewers, but they contributed to the discovery too. They insisted that we do the more difficult simultaneous version of the task with babies and that we test "different" as well as "same." So we went back to the lab, muttering that the "different" task would be too hard. But we were wrong again. Now we are looking at another weird result. 
Although the 4-year-olds did well on the easier sequential task, in a study we're still working on, they actually seem to be doing worse than the babies on the harder simultaneous one. So there's a new problem for us to solve. Scientists legitimately worry about confirmation bias, our tendency to look for evidence that fits what we already think. But, fortunately, learning is most fun, for us and 18-month-olds too, when the answers are most surprising. Scientific discoveries aren't about individual geniuses miraculously grasping the truth. Instead, they come when we all chase the unexpected together.
THE BRAIN'S CROWDSOURCING SOFTWARE Over the past decade, popular science has been suffering from neuromania. The enthusiasm came from studies showing that particular areas of the brain “light up” when you have certain thoughts and experiences. It’s mystifying why so many people thought this explained the mind. What have you learned when you say that someone’s visual areas light up when they see things? People still seem to be astonished at the very idea that the brain is responsible for the mind—a bunch of grey goo makes us see! It is astonishing. But scientists knew that a century ago; the really interesting question now is how the grey goo lets us see, think and act intelligently. New techniques are letting scientists understand the brain as a complex, dynamic, computational system, not just a collection of individual bits of meat associated with individual experiences. These new studies come much closer to answering the “how” question. Take a study in the journal Nature this year by Stefano Fusi of Columbia University College of Physicians and Surgeons, Earl K. Miller of the Massachusetts Institute of Technology and their colleagues. Fifty years ago David Hubel and Torsten Wiesel made a great Nobel Prize-winning discovery. They recorded the signals from particular neurons in cats’ brains as the animals looked at different patterns. The neurons responded selectively to some images rather than others. One neuron might only respond to lines that slanted right, another only to those slanting left. But many neurons don’t respond in this neatly selective way. This is especially true for the neurons in the parts of the brain that are associated with complex cognition and problem-solving, like the prefrontal cortex. Instead, these cells are a mysterious mess—they respond idiosyncratically to different complex collections of features. What were these neurons doing? In the new study the researchers taught monkeys to remember and respond to one shape rather than another while they recorded their brain activity. But instead of just looking at one neuron at a time, they recorded the activity of many prefrontal neurons at once. A number of them showed weird, messy “mixed selectivity” patterns. One neuron might respond when the monkey remembered just one shape or only when it recognized the shape but not when it recalled it, while a neighboring cell showed a different pattern. In order to analyze how the whole group of cells worked, the researchers turned to the techniques of computer scientists who are trying to design machines that can learn. Computers aren’t made of carbon, of course, let alone neurons. But they have to solve some of the same problems, like identifying and remembering patterns. The techniques that work best for computers turn out to be remarkably similar to the techniques that brains use. Essentially, the researchers found the brain was using the same general sort of technique that Google uses for its search algorithm. You might think that the best way to rank search results would be to pick out a few features of each Web page like “relevance” or “trustworthiness”—in the same way as the neurons picked out whether an edge slanted right or left. Instead, Google does much better by combining all the many, messy, idiosyncratic linking decisions of individual users. With neurons that detect just a few features, you can capture those features and combinations of features, but not much more.
To capture more complex patterns, the brain does better by amalgamating and integrating information from many different neurons with very different response patterns. The brain crowd-sources. Scientists have long argued that the mind is more like a general software program than like a particular hardware set-up. The new combination of neuroscience and computer science doesn’t just tell us that the grey goo lets us think, or even exactly where that grey goo is. Instead, it tells us what programs it runs.
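One way to make the "mixed selectivity" point concrete is the textbook XOR example, sketched below under my own simplifying assumptions rather than the authors' analysis. With only two pure feature detectors and a simple weighted readout, no setting of the weights can signal "exactly one of the two features is present"; add a third unit that fires only to the conjunction, and the same kind of readout handles it easily.

    # Two pure feature detectors (a and b) versus the same detectors plus one
    # "mixed" unit that fires only when both features are present together.
    # Weights are picked by hand for the illustration.

    stimuli = [(0, 0), (0, 1), (1, 0), (1, 1)]     # feature A and feature B: absent or present
    target = {s: s[0] ^ s[1] for s in stimuli}     # respond when exactly one feature is present

    def readout(stimulus, use_mixed_unit):
        a, b = stimulus
        mixed = a * b                               # conjunction detector
        if use_mixed_unit:
            score = 1.0 * a + 1.0 * b - 2.0 * mixed   # hand-picked weights
        else:
            score = 1.0 * a + 1.0 * b
        return 1 if score >= 0.5 else 0

    for s in stimuli:
        print(s, target[s], readout(s, False), readout(s, True))
    # Without the mixed unit the readout gets (1, 1) wrong, and no choice of
    # weights can fix it; with the mixed unit every case comes out right.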
DRUGGED-OUT MICE OFFER INSIGHT INTO THE GROWING BRAIN Imagine a scientist peeking into the skulls of glow-in-the-dark, cocaine-loving mice and watching their nerve cells send out feelers. It may sound more like something from cyberpunk writer William Gibson than from the journal Nature Neuroscience. But this kind of startling experiment promises to change how we think about the brain and mind. Scientific progress often involves new methods as much as new ideas. The great methodological advance of the past few decades was Functional Magnetic Resonance Imaging: It lets scientists see which areas of the brain are active when a person thinks something. But scientific methods can also shape ideas, for good and ill. The success of fMRI led to a misleadingly static picture of how the brain works, particularly in the popular imagination. When the brain lights up to show the distress of a mother hearing her baby cry, it's tempting to say that motherly concern is innate. But that doesn't follow at all. A learned source of distress can produce the same effect. Logic tells you that every time we learn something, our brains must change, too. In fact, that kind of change is the whole point of having a brain in the first place. The fMRI pictures of brain areas "lighting up" don't show those changes. But there are remarkable new methods that do, at least for mice. Slightly changing an animal's genes can make it produce fluorescent proteins. Scientists can use a similar technique to make mice with nerve cells that light up. Then they can see how the mouse neurons grow and connect through a transparent window in the mouse's skull. The study that I cited from Nature Neuroscience, by Linda Wilbrecht and her colleagues, used this technique to trace one powerful and troubling kind of learning—learning to use drugs. Cocaine users quickly learn to associate their high with a particular setting, and when they find themselves there, the pull of the drug becomes particularly irresistible. First, the researchers injected mice with either cocaine or (for the control group) salt water and watched what happened to the neurons in the prefrontal part of their brains, where decisions get made. The mice who got cocaine developed more "dendritic spines" than the other mice—their nerve cells sent out more potential connections that could support learning. So cocaine, just by itself, seems to make the brain more "plastic," more susceptible to learning. But a second experiment was even more interesting. Mice, like humans, really like cocaine. The experimenters gave the mice cocaine on one side of the cage but not the other, and the mice learned to go to that side of the cage. The experimenters recorded how many new neural spines were formed and how many were still there five days later. All the mice got the same dose of cocaine, but some of them showed a stronger preference for the cocaine side of the cage than others—they had learned the association between the cage and the drug better. The mice who learned better were much more likely to develop persistent new spines. The changes in behavior were correlated to changes in the brain. It could be that some mice were more susceptible to the effects of the cocaine, which produced more spines, which made them learn better. Or it could be that the mice who were better learners developed more persistent spines. We don't know how this drug-induced learning compares to more ordinary kinds of learning. 
But we do know, from similar studies, that young mice produce and maintain more new spines than older mice. So it may be that the quick, persistent learning that comes with cocaine, though destructive, is related to the profound and extensive learning we see early in life, in both mice and men.
IS IT POSSIBLE TO REASON ABOUT HAVING A CHILD? How can you decide whether to have a child? It’s a complex and profound question—a philosophical question. But it’s not a question traditional philosophers thought about much. In fact, the index of the 1967 “Encyclopedia of Philosophy” had only four references to children at all—though there were hundreds of references to angels. You could read our deepest thinkers and conclude that humans reproduced through asexual cloning. Recently, though, the distinguished philosopher L.A. Paul (who usually works on abstruse problems in the metaphysics of causation) wrote a fascinating paper, forthcoming in the journal Res Philosophica. Prof. Paul argues that there is no rational way to decide to have children—or not to have them. How do we make a rational decision? The classic answer is that we imagine the outcomes of different courses of action. Then we consider both the value and the probability of each outcome. Finally, we choose the option with the highest “utilities,” as the economists say. Does the glow of a baby’s smile outweigh all those sleepless nights? It’s not just economists. You can find the same picture in the advice columns of Vogue and Parenting. In the modern world, we assume that we can decide whether to have children based on what we think the experience of having a child will be like. But Prof. Paul thinks there’s a catch. The trouble is that, notoriously, there is no way to really know what having a child is like until you actually have one. You might get hints from watching other people’s children. But that overwhelming feeling of love for this one particular baby just isn’t something you can understand beforehand. You may not even like other people’s children and yet discover that you love your own child more than anything. Of course, you also can’t really understand the crushing responsibility beforehand, either. So, Prof. Paul says, you just can’t make the decision rationally. I think the problem may be even worse. Rational decision-making assumes there is a single person with the same values before and after the decision. If I’m trying to decide whether to buy peaches or pears, I can safely assume that if I prefer peaches now, the same “I” will prefer them after my purchase. But what if making the decision turns me into a different person with different values? Part of what makes having a child such a morally transformative experience is the fact that my child’s well-being can genuinely be more important to me than my own. It may sound melodramatic to say that I would give my life for my children, but, of course, that’s exactly what every parent does all the time, in ways both large and small. Once I commit myself to a child, I’m literally not the same person I was before. My ego has expanded to include another person even though—especially though—that person is utterly helpless and unable to reciprocate. The person I am before I have children has to make a decision for the person I will be afterward. If I have kids, chances are that my future self will care more about them than just about anything else, even her own happiness, and she’ll be unable to imagine life without them. But, of course, if I don’t have kids, my future self will also be a different person, with different interests and values. Deciding whether to have children isn’t just a matter of deciding what you want. It means deciding who you’re going to be. L.A. 
Paul, by the way, is, like me, both a philosopher and a mother—a combination that’s still surprisingly rare. There are more and more of us, though, so maybe the 2067 Encyclopedia of Philosophy will have more to say on the subject of children. Or maybe even philosopher-mothers will decide it’s easier to stick to thinking about angels.
THE GORILLA LURKING IN OUR CONSCIOUSNESS Imagine that you are a radiologist searching through slides of lung tissue for abnormalities. On one slide, right next to a suspicious nodule, there is the image of a large, threatening, gorilla. What would you do? Write to the American Medical Association? Check yourself into the schizophrenia clinic next door? Track down the practical joker among the lab technicians? In fact, you probably wouldn’t do anything. That is because, although you were staring right at the gorilla, you probably wouldn’t have seen it. That startling fact shows just how little we understand about consciousness. In the journal Psychological Science, Trafton Drew and colleagues report that they got radiologists to look for abnormalities in a series of slides, as they usually do. But then they added a gorilla to some of the slides. The gorilla gradually faded into the slides and then gradually faded out, since people are more likely to notice a sudden change than a gradual one. When the experimenters asked the radiologists if they had seen anything unusual, 83% said no. An eye-tracking machine showed that radiologists missed the gorilla even when they were looking straight at it. This study is just the latest to demonstrate what psychologists call “inattentional blindness.” When we pay careful attention to one thing, we become literally blind to others—even startling ones like gorillas. In one classic study, Dan Simons and Christopher Chabris showed people a video of students passing a ball around. They asked the viewers to count the number of passes, so they had to pay attention to the balls. In the midst of the video, someone in a gorilla suit walked through the players. Most of the viewers, who were focused on counting the balls, didn’t see the gorilla at all. You can experience similar illusions yourself at invisiblegorilla.com. It is an amazingly robust phenomenon—I am still completely deceived by each new example. You might think this is just a weird thing that happens with videos in a psychology lab. But in the new study, the radiologists were seasoned professionals practicing a real and vitally important skill. Yet they were also blind to the unexpected events. In fact, we are all subject to inattentional blindness all the time. That is one of the foundations of magic acts. Psychologists have started collaborating with professional magicians to figure out how their tricks work. It turns out that if you just keep your audience’s attention focused on the rabbit, they literally won’t even see what you’re doing with the hat. Inattentional blindness is as important for philosophers as it is for radiologists and magicians. Many philosophers have claimed that we can’t be wrong about our conscious experiences. It certainly feels that way. But these studies are troubling. If you asked the radiologist about the gorilla, she’d say that she just experienced a normal slide in exactly the way she experienced the other slides—except that we know that can’t be true. Did she have the experience of seeing the gorilla and somehow not know it? Or did she experience just the part of the slide with the nodule and invent the gorilla-free remainder? At this very moment, as I stare at my screen and concentrate on this column, I’m absolutely sure that I’m also experiencing the whole visual field—the chair, the light, the view out my window. But for all I know, invisible gorillas may be all around me. 
Many philosophical arguments about consciousness are based on the apparently certain and obvious intuitions we have about our experience. This includes, of course, arguments that consciousness just couldn’t be explained scientifically. But scientific experiments like this one show that those beautifully clear and self-evident intuitions are really incoherent and baffling. We will have to wrestle with many other confusing, tricky, elusive gorillas before we understand how consciousness works.
HOW TO GET CHILDREN TO EAT VEGGIES To parents, there is no force known to science as powerful as the repulsion between children and vegetables. Of course, just as supercooling fluids can suspend the law of electrical resistance,
melting cheese can suspend the law of vegetable resistance. This is sometimes known as the Pizza Paradox.
There is also the Edamame Exception, but this is generally considered to be due to the Snack Uncertainty Principle,
by which a crunchy soybean is and is not a vegetable simultaneously.
But when melty mozzarella conditions don’t apply, the law of vegetable repulsion would appear to be as immutable as gravity,
magnetism or the equally mysterious law of child-godawful mess attraction.
In a new paper in Psychological Science, however, Sarah Gripshover and Ellen Markman of Stanford University have shown that scientists can
overcome the child-vegetable repulsive principle. Remarkably, the scientists in question are the children themselves. It turns out that,
by giving preschoolers a new theory of nutrition, you can get them to eat more vegetables.
My colleagues and I have argued that very young children construct intuitive theories of the world around them
(my first book was called “The Scientist in the Crib”). These theories are coherent, causal representations of how things or people or
animals work. Just like scientific theories, they let children make sense of the world, construct predictions and design
intelligent actions.
Preschoolers already have some of the elements of an intuitive theory of biology.
They understand that invisible germs can make you sick and that eating helps make you healthy, even if they don’t get all the details.
One little boy explained about a peer, “He needs more to eat because he is growing long arms.”
The Stanford researchers got teachers to read 4- and 5-year-olds a series of storybooks for several weeks.
The stories gave the children a more detailed but still accessible theory of nutrition.
They explained that food is made up of different invisible parts, the equivalent of nutrients; that when you eat,
your body breaks up the food into those parts; and that different kinds of food have different invisible parts.
They also explained that your body needs different nutrients to do different things,
so that to function well you need to take in a lot of different nutrients.
In a control condition, the teachers read children similar stories based on the current United States Department of Agriculture website
for healthy nutrition. These stories also talked about healthy eating and encouraged it.
But they didn’t provide any causal framework to explain how eating works or why you should eat better.
The researchers also asked children questions to test whether they had acquired a deeper understanding of nutrition.
And at snack time they offered the children vegetables as well as fruit, cheese and crackers.
The children who had heard the theoretical stories understood the concepts better.
More strikingly, they also were more likely to pick the vegetables at snack time.
We don’t yet know if this change in eating habits will be robust or permanent, but a number of other recent studies
suggest that changing children’s theories can actually change their behavior too.
A quick summary of 30 years of research in developmental psychology yields two big propositions:
Children are much smarter than we thought, and adults are much stupider.
Studies like this one suggest that the foundations of scientific thinking—causal inference, coherent explanation,
and rational prediction—are not a creation of advanced culture but our evolutionary birthright.
THE WORDSWORTHS: CHILD PSYCHOLOGISTS
Last week, I made a pilgrimage to Dove Cottage—a tiny white house nestled among
the meres and fells of England's Lake District. William Wordsworth and his
sister Dorothy lived there while they wrote two of my favorite books: his
"Lyrical Ballads" and her journal—both masterpieces of Romanticism.
The Romantics celebrated the sublime—an altered, expanded, oceanic state of
consciousness. Byron and Shelley looked for it in sex. Wordsworth's friends,
Coleridge and De Quincey, tried drugs (De Quincey's opium scales sit next to
Dorothy's teacups in Dove Cottage).
But Wordsworth identified this exalted state with the very different world of
young children. His best poems describe the "splendor in the grass," the "glory
in the flower," of early childhood experience. His great "Ode: Intimations of
Immortality From Recollections of Early Childhood" begins: There was a time
when meadow, grove, and stream, / The earth, and every common sight, / To me did
seem / Apparell'd in celestial light, / The glory and the freshness of a dream.
This picture of the child's mind is remarkably close to the newest scientific
picture. Children's minds and brains are designed to be especially open to
experience. They're unencumbered by the executive planning, focused attention
and prefrontal control that fuel the mad endeavor of adult life, the getting
and spending that lays waste our powers (and, to be fair, lets us feed our
children).
This makes children vividly conscious of "every common sight" that habit has
made invisible to adults. It might be Wordsworth's meadows or the dandelions and
garbage trucks that enchant my 1-year-old grandson.
It's often said that the Romantics invented childhood, as if children had merely
been small adults before. But scientifically speaking, Wordsworth discovered
childhood—he saw children more clearly than others had. Where did this insight
come from? Mere recollection can't explain it. After all, generations of poets
and philosophers had recollected early childhood and seen only confusion and
limitation.
I suspect it came at least partly from his sister Dorothy. She was an
exceptionally sensitive and intelligent observer, and the descriptions she
recorded in her journal famously made their way into William's poems. He said
that she gave him eyes and ears. Dorothy was also what the evolutionary
anthropologist Sarah Hrdy calls an "allomother." All her life, she devotedly
looked after other people's children and observed their development.
In fact, when William was starting to do his greatest work, he and Dorothy were
looking after a toddler together. They rescued 4-year-old Basil Montagu from his
irresponsible father, who paid them 50 pounds a year to care for him. The young
Wordsworth earned more as a nanny than as a poet. Dorothy wrote about Basil—"I
do not think there is any pleasure more delightful than that of marking the
development of a child's faculties." It could be the credo of every
developmental psychologist.
There's been much prurient speculation about whether Dorothy and William slept
together. But very little has been written about the undoubted fact that they
raised a child together.
For centuries the people who knew young children best were women. But, sexism
aside, just bearing and rearing children was such overwhelming work that it left
little time for thinking or writing about them, especially in a world without
birth control, vaccinations or running water.
Dorothy was a thinker and writer who lived intimately with children but didn't
bear the full, crushing responsibility of motherhood. Perhaps she helped William
to understand children's minds so profoundly and describe them so eloquently.
IMPLICIT RACIAL BIAS IN PRESCHOOLERS Are human beings born good and corrupted by society or
born bad and redeemed by civilization? Lately, goodness has been on a roll,
scientifically speaking. It turns out that even 1-year-olds already sympathize
with the distress of others and go out of their way to help them. But the most recent work suggests that the origins of
evil may be only a little later than the origins of good. New studies show that even young children discriminate. Our impulse to love and help the members of our own
group is matched by an impulse to hate and fear the members of other groups. In
"Gulliver's Travels," Swift described a vicious conflict between the Big-Enders,
who ate their eggs with the big end up, and the Little-Enders, who started from
the little end. Historically, largely arbitrary group differences (Catholic vs.
Protestant, Hutu vs. Tutsi) have led to persecution and even genocide. When and why does this particular human evil arise? A
raft of new studies shows that even 5-year-olds discriminate between what
psychologists call in-groups and out-groups. Moreover, children actually seem to
learn subtle aspects of discrimination in early childhood. In a recent paper, Yarrow Dunham at Princeton and
colleagues explored when children begin to have negative thoughts about other
racial groups. White kids aged 3 to 12 and adults saw computer-generated,
racially ambiguous faces. They had to say whether they thought the face was
black or white. Half the faces looked angry, half happy. The adults were more
likely to say that angry faces were black. Even people who would hotly deny any
racial prejudice unconsciously associate other racial groups with anger. But what about the innocent kids? Even 3- and
4-year-olds were more likely to say that angry faces were black. In fact,
younger children were just as prejudiced as older children and adults. Is this just something about white attitudes toward
black people? The researchers did the same experiment with white and Asian faces. Although
Asians aren't stereotypically angry, children also associated Asian faces with
anger. Then the researchers tested Asian children in Taiwan with exactly the
same white and Asian faces. The Asian children were more likely to think that
angry faces were white. They also associated the out-group with anger, but for
them the out-group was white. Was this discrimination the result of some universal,
innate tendency or were preschoolers subtly learning about discrimination? For
black children, white people are the out-group. But, surprisingly, black
children (and adults) were the only ones to show no bias at all; they
categorized the white and black faces in the same way. The researchers suggest
that this may be because black children pick up conflicting signals—they know
that they belong to the black group, but they also know that the white group has
higher status. These findings show the deep roots of group conflict.
But the last result also suggests that children quickly learn how groups are
related to each other. Learning was important in another way, too. The
researchers began by asking the children to categorize unambiguously white,
black or Asian faces. Children began to differentiate the racial groups at
around age 4, but many of the children still did not recognize the racial
categories. Moreover, children made the white/Asian distinction at a later age
than the black/white distinction. Only children who recognized the racial
categories were biased, but they were as biased as the adults tested at the same
time. Still, it took kids from all races a while to learn those categories. The studies of early altruism show that the natural
state of man is not a war of all against all, as Thomas Hobbes said. But it may
quickly become a war of us against them.
NATURE, CULTURE AND GAY MARRIAGE There's been a lot of talk about nature in the
gay-marriage debate. Opponents point to the "natural" link between heterosexual
sex and procreation. Supporters note nature's staggering diversity of sexual
behavior and the ubiquity of homosexual sex in our close primate relatives. But,
actually, gay marriage exemplifies a much more profound part of human nature:
our capacity for cultural evolution. The birds and the bees may be enough for the birds and
the bees, but for us it's just the beginning. Culture is our nature; the evolution of culture was one
secret of our biological success. Evolutionary theorists like the philosopher
Kim Sterelny, the biologist Kevin Laland and the psychologist Michael Tomasello
emphasize our distinctively human ability to transmit new information and social
practices from generation to generation. Other animals have more elements of
culture than we once thought, but humans rely on cultural transmission far more
than any other species. Still, there's a tension built into cultural evolution.
If the new generation just slavishly copies the previous one, the process of
innovation will seize up. The advantage of the "cultural ratchet" is that we can
use the discoveries of the previous generation as a jumping-off point for
revisions and discoveries of our own. Man may not be The Rational Animal, but we are The
Empirical Animal—perpetually revising what we do in the light of our experience. Studies show that children have a distinctively human
tendency to precisely imitate what other people do. But they also can choose
when to imitate exactly, when to modify what they've seen, and when to try
something brand new. Human adolescence, with its risk-taking and exploration,
seems to be a particularly important locus of cultural innovation.
Archaeologists think teenagers may have been the first cave-painters. We can
even see this generational effect in other primates. Some macaque monkeys
famously learned how to wash sweet potatoes and passed this skill to others. The
innovator was the equivalent of a preteen girl, and other young macaques were
the early adopters. As in biological evolution, there is no guarantee that
cultural evolution will always move forward, or that any particular cultural
tradition or innovation will prove to be worth preserving. But although the arc
of cultural evolution is long and irregular, overall it does seem to bend toward
justice, or, at least, to human thriving. Gay marriage demonstrates this dynamic of tradition and
innovation in action. Marriage has itself evolved. It was once an institution
that emphasized property and inheritance. It has become one that provides a way
of both expressing and reinforcing values of commitment, loyalty and stability.
When gay couples want marriage, rather than just civil unions, it's precisely
because they endorse those values and want to be part of that tradition. At the same time, as more and more people have
courageously come out, there have been more and more gay relationships to
experience. That experience has led most of the millennial generation to
conclude that the link between marital tradition and exclusive heterosexuality
is unnecessary, indeed wrong. The generational shift at the heart of cultural
evolution is especially plain. Again and again, parents report that they're
being educated by their children. It's ironic that the objections to gay marriage center
on child-rearing. Our long protected human childhood, and the nurturing and
investment that goes with it, is, in fact, exactly what allows social learning
and cultural evolution. Nurture, like culture, is also our nature. We nurture
our children so that they can learn from our experience, but also so that
subsequent generations can learn from theirs. Marriage and family are institutions designed, at least in part, to help create an autonomous new generation, free to try to make better, more satisfying kinds of marriage and family for the generations that follow.
SLEEPING AND LEARNING LIKE A BABY
Babies and children sleep a lot—12 hours a day or so to our eight. But why would
children spend half their lives in a state of blind, deaf paralysis punctuated
by insane hallucinations? Why, in fact, do all higher animals surrender their
hard-won survival abilities for part of each day?
Children themselves can be baffled and indignant about the way that sleep robs
them of consciousness. We weary grown-ups may welcome a little oblivion, but at
nap time, toddlers will rage, rage against the dying of the light.
Part of the answer is that sleep helps us to learn. It may just be too hard for
a brain to take in the flood of new experiences and make sense of them at the
same time. Instead, our brains look at the world for a while and then shut out
new input and sort through what they have seen.
Children learn in a particularly profound way. Some remarkable experiments show
that even tiny babies can take in a complex statistical pattern of data and
figure out the rules and principles that explain the pattern. Sleep seems to
play an especially important role in this kind of learning.
In 2006, Rebecca Gómez and her colleagues at the University of Arizona taught
15-month-old babies a made-up language. The babies listened to 240 "sentences"
made of nonsense words, like "Pel hiftam jic" or "Pel lago jic." Like real
sentences, these sentences followed rules. If "pel" was the first word, for
instance, "jic" would always be the third one.
Half the babies heard the sentences just before they had a nap, and the other
half heard them just after they woke up, and they then stayed awake.
Four hours later, the experimenters tested whether the babies had learned the
"first and third" rule by seeing how long the babies listened to brand-new
sentences. Some of the new sentences followed exactly the same rule as the
sentences that the babies had heard earlier. Others followed the same "first and
third" pattern but used different nonsense words.
Remarkably, the babies who had stayed awake had learned the specific rules
behind the sentences they heard four hours before—like the rule about "pel" and
"jic." Even more remarkably, the babies who had slept after the instruction
seemed to learn the more abstract principle that the first and third words were
important, no matter what those words actually were.
Just this month, a paper by Ines Wilhelm at the University of Tübingen and
colleagues showed that older children also learn in their sleep. In fact, they
learn better than grown-ups. They showed 8-to-11-year-olds and adults a grid of
eight lights that lit up over and over in a particular sequence. Half the
participants saw the lights before bedtime, half saw them in the morning. After
10 to 12 hours, the experimenters asked the participants to describe the
sequence. The children and adults who had stayed awake got about half the
transitions right, and the adults who had slept were only a little better. But
the children who had slept were almost perfect—they learned substantially better
than either group of adults.
There was another twist. While the participants slept, they wore an electronic
cap to measure brain activity. The children had much more "slow-wave sleep" than
the adults—that's an especially deep, dreamless kind of sleep. And both children
and adults who had more slow-wave sleep learned better.
Children may sleep so much because they have so much to learn (though toddlers
may find that scant consolation for the dreaded bedtime). It's paradoxical to
try to get children to learn by making them wake up early to get to school and
then stay up late to finish their homework.
Colin Powell reportedly said that on the eve of the Iraq war he was sleeping
like a baby—he woke up every two hours screaming. But really sleeping like a
baby might make us all smarter.