Alison Gopnik: The Wall Street Journal Columns
Mind & Matter, now once per month
(Click on the title for the text, or on the date for a link to The Wall Street Journal *)
The Many Minds of the Octopus (15 Apr 2021)
The Power of the Wandering Mind (25 Feb 2021)
Our Sense of Fairness Is Beyond Politics (21 Jan 2021)
Despite Covid-19, Older People Are Still Happier (11 Dec 2020)
What AI Can Learn From Parents (5 Nov 2020)
Innovation Relies on Imitation (1 Oct 2020)
A Good Life Doesn't Mean an Easy One (28 Aug 2020)
Learning Without a Brain (23 Jul 2020)
Why Elders Are Indispensable for All of Us (12 Jun 2020)
How Humans Evolved to Care for Others (16 Apr 2020)
Detecting Fake News Takes Time (20 Feb 2020)
Humans Evolved to Love Baby Yoda (16 Jan 2020)
Why the Old Look Down on the Young (5 Dec 2019)
Parents Need to Help Their Children Take Risks (24 Oct 2019)
Teenage Rebels with a Cause (12 Sep 2019)
How Early Do Cultural Differences Start? (11 Jul 2019)
The Explosive Evolution of Consciousness (5 Jun 2019)
Psychedelics as a Path to Social Learning (25 Apr 2019)
What AI Is Still Far From Figuring Out (20 Mar 2019)
Young Children Make Good Scientists (14 Feb 2019)
A Generational Divide in the Uncanny Valley (10 Jan 2019)
For Gorillas, Being a Good Dad Is Sexy (30 Nov 2018)
The Cognitive Advantages of Growing Older (2 Nov 2018)
Imaginary Worlds of Childhood (20 Sep 2018)
Like Us, Whales May Be Smart Because They're Social (16 Aug 2018)
For Babies, Life May Be a Trip (18 Jul 2018)
Who's Most Afraid to Die? A Surprise (6 Jun 2018)
Curiosity Is a New Power in Artificial Intelligence (4 May 2018)
Grandparents: The Storytellers Who Bind Us (29 Mar 2018)
Are Babies Able to See What Others Feel? (22 Feb 2018)
What Teenagers Gain from Fine-Tuned Social Radar (18 Jan 2018)
The Smart Butterfly's Guide to Reproduction (6 Dec 2017)
The Power of Pretending: What Would a Hero Do? (1 Nov 2017)
The Potential of Young Intellect, Rich or Poor (29 Sep 2017)
Do Men and Women Have Different Brains? (25 Aug 2017)
Whales Have Complex Culture, Too (3 Aug 2017)
How to Get Old Brains to Think Like Young Ones (7 Jul 2017)
What the Blind See (and Don't) When Given Sight (8 Jun 2017)
How Much Do Toddlers Learn From Play? (11 May 2017)
The Science of 'I Was Just Following Orders' (12 Apr 2017)
How Much Screen Time Is Safe for Teens? (17 Mar 2017)
When Children Beat Adults at Seeing the World (16 Feb 2017)
Flying High: Research Unveils Birds' Learning Power (18 Jan 2017)
When Awe-Struck, We Feel Both Smaller and Larger (22 Dec 2016)
The Brain Machinery Behind Daydreaming (23 Nov 2016)
Babies Show a Clear Bias--To Learn New Things (26 Oct 2016)
Our Need to Make and Enforce Rules Starts Very Young (28 Sep 2016)
Should We Let Toddlers Play with Saws and Knives? (31 Aug 2016)
Want Babies to Learn from Video? Try Interactive (3 Aug 2016)
A Small Fix in Mind-Set Can Keep Students in School (16 Jun 2016)
Aliens Rate Earth: Skip the Primates, Come for the Crows (18 May 2016)
The Psychopath, the Altruist and the Rest of Us (21 Apr 2016)
Young Mice, Like Children, Can Grow Up Too Fast (23 Mar 2016)
How Babies Know That Allies Can Mean Power (25 Feb 2016)
To Console a Vole: A Rodent Cares for Others (26 Jan 2016)
Science Is Stepping Up the Pace of Innovation (1 Jan 2016)
Giving Thanks for the Innovation That Saves Babies (25 Nov 2015)
Who Was That Ghost? Science's Reassuring Reply (28 Oct 2015)
Is Our Identity in Intellect, Memory or Moral Character? (9 Sep 2015)
Babies Make Predictions, Too (12 Aug 2015)
Aggression in Children Makes Sense - Sometimes (16 Jul 2015)
Smarter Every Year? Mystery of the Rising IQs (27 May 2015)
Brains, Schools and a Vicious Cycle of Poverty (13 May 2015)
The Mystery of Loyalty, in Life and on 'The Americans' (1 May 2015)
How 1-Year-Olds Figure Out the World (15 Apr 2015)
How Children Develop the Idea of Free Will (1 Apr 2015)
How We Learn to Be Afraid of the Right Things (18 Mar 2015)
Learning From King Lear: The Saving Grace of Low Status (4 Mar 2015)
The Smartest Questions to Ask About Intelligence (18 Feb 2015)
The Dangers of Believing that Talent Is Innate (4 Feb 2015)
What a Child Can Teach a Smart Computer (22 Jan 2015)
Why Digital-Movie Effects Still Can't Do a Human Face (8 Jan 2015)
How Children Get the Christmas Spirit (24 Dec 2014)
Who Wins When Smart Crows and Kids Match Wits? (10 Dec 2014)
DNA and the Randomness of Genetic Problems (25 Nov 2014)
How Humans Learn to Communicate with Their Eyes (19 Nov 2014)
A More Supportive World Can Work Wonders for the Aged (5 Nov 2014)
What Sends Teens Toward Triumph or Tribulation (22 Oct 2014)
Campfires Helped Inspire Community Culture (8 Oct 2014)
Poverty's Vicious Cycle Can Affect Our Genes (24 Sep 2014)
Humans Naturally Follow Crowd Behavior (12 Sep 2014)
Even Children Get More Outraged at 'Them' Than at 'Us' (27 Aug 2014)
In Life, Who Wins, the Fox or the Hedgehog? (15 Aug 2014)
Do We Know What We See? (31 Jul 2014)
Why Is It So Hard for Us to Do Nothing? (18 Jul 2014)
A Toddler's Souffles Aren't Just Child's Play (3 Jul 2014)
For Poor Kids, New Proof That Early Help Is Key (13 Jun 2014)
Rice, Wheat and the Values They Sow (30 May 2014)
What Made Us Human? Perhaps Adorable Babies (16 May 2014)
Grandmothers: The Behind-the-Scenes Key to Human Culture? (2 May 2014)
See Jane Evolve: Picture Books Explain Darwin (18 Apr 2014)
Scientists Study Why Stories Exist (4 Apr 2014)
The Kid Who Wouldn't Let Go of 'The Device' (21 Mar 2014)
Why You're Not as Clever as a 4-Year-Old (7 Mar 2014)
Are Schools Asking to Drug Kids for Better Test Scores? (21 Feb 2014)
The Psychedelic Road to Other Conscious States (7 Feb 2014)
Time to Retire the Simplicity of Nature vs. Nurture (24 Jan 2014)
The Surprising Probability Gurus Wearing Diapers (10 Jan 2014)
What Children Really Think About Magic (28 Dec 2013)
Trial and Error in Toddlers and Scientists (14 Dec 2013)
Gratitude for the Cosmic Miracle of a Newborn Child (29 Nov 2013)
The Brain's Crowdsourcing Software (16 Nov 2013)
World Series Recap: May Baseball's Irrational Heart Keep On Beating (2 Nov 2013)
Drugged-out Mice Offer Insight into the Growing Brain (4 Oct 2013)
Poverty Can Trump a Winning Hand of Genes (20 Sep 2013)
Is It Possible to Reason about Having a Child? (7 Sep 2013)
Even Young Children Adopt Arbitrary Rituals (24 Aug 2013)
The Gorilla Lurking in Our Consciousness (9 Aug 2013)
Does Evolution Want Us to Be Unhappy? (27 Jul 2013)
How to Get Children to Eat Veggies (13 Jul 2013)
What Makes Some Children More Resilient? (29 Jun 2013)
Wordsworth, The Child Psychologist (15 Jun 2013)
Zazes, Flurps and the Moral World of Kids (31 May 2013)
How Early Do We Learn Racial 'Us and Them'? (18 May 2013)
How the Brain Really Works (4 May 2013)
Culture Begets Marriage - Gay or Straight (21 Apr 2013)
For Innovation, Dodge the Prefrontal Police (5 Apr 2013)
Sleeping Like a Baby, Learning at Warp Speed (22 Mar 2013)
Why Are Our Kids Useless? Because We're Smart (8 Mar 2013)
Cephalopods are having a moment. An octopus stars in a documentary nominated for an Academy Award (“My Octopus Teacher”). Octos, as the scuba-diving philosopher Peter Godfrey-Smith calls them, also play a leading role in his marvelous new book “Metazoa,” alongside a supporting cast of corals, sponges, sharks and crabs. (I like Mr. Godfrey-Smith’s plural, which avoids the tiresome debate over Latin and Greek endings.)
Part of the allure of the octos is that they are both very smart, probably the smartest of invertebrates, and extremely weird. The intelligence and weirdness may be connected and can perhaps teach us something about those other intelligent, weird animals we call Homo sapiens.
Smart birds and mammals tend to have long lives and an especially long, protected childhood. Crows and chimps put a lot of work into taking care of their helpless babies. But, sadly and strangely, the intelligent octos only live for a year and don’t really have a childhood at all. They die soon after reproducing and, like the spider heroine of “Charlotte’s Web,” don’t even live to see the next generation grow up, let alone to look after them.
Smart birds and mammals also keep their neurons in one place—their brains. But octos split them up. They have over 500 million neurons altogether, about as many as dogs, but as many of those neurons reside in their eight arms as in their heads. The arms seem able to act as independent agents, waving and wandering, exploring and sensing the world around them—even reaching out to the occasional diving philosopher or filmmaker. Mr. Godfrey-Smith’s book has a fascinating discussion of how it must feel to have this sort of split consciousness, nine selves all inhabiting the same body.
I think there might be a link between these two strange facts of octopus life. I’ve previously argued that childhood and intelligence are correlated because of what computer scientists call the “explore-exploit” trade-off: It’s very difficult to design a single system that’s curious and imaginative—that is, good at exploring—and at the same time, efficient and effective—or good at exploiting. Childhood gives animals a chance to explore and learn first; then when they grow up, they can exploit what they’ve learned to get things done.
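The explore-exploit trade-off the column describes can be sketched in a few lines of code. Below is a minimal "epsilon-greedy" bandit simulation—a standard computer-science illustration of the idea, not anything from the column itself—in which an agent explores heavily at first (its "childhood") and then settles into exploiting the best option it has found. The payoff numbers and decay schedule are invented for illustration.

```python
import random

def run_bandit(payoffs, steps=1000, seed=0):
    """Simulate a decaying-epsilon bandit: explore a lot early
    ("childhood"), then mostly exploit what has been learned.
    `payoffs` holds the hidden mean reward of each option."""
    rng = random.Random(seed)
    counts = [0] * len(payoffs)   # times each option was tried
    means = [0.0] * len(payoffs)  # running estimate of each payoff
    total = 0.0
    for t in range(steps):
        epsilon = 1.0 / (1 + t / 50)  # exploration rate: high early, low later
        if rng.random() < epsilon:
            arm = rng.randrange(len(payoffs))  # explore: pick anything
        else:
            arm = means.index(max(means))      # exploit: pick the best so far
        reward = payoffs[arm] + rng.gauss(0, 0.1)  # noisy observed payoff
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # update estimate
        total += reward
    return means.index(max(means)), total

best_arm, _ = run_bandit([0.2, 0.5, 0.9])
```

A single system that kept epsilon high forever would keep "playing" and never cash in; one that started with epsilon at zero would lock onto its first guess. Front-loading exploration, as childhood does, gets the benefits of both phases.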
Childhood isn’t the only way to solve the explore-exploit problem. Bees, who, like octos, are smart but short-lived, use a division of labor, with scouts who explore and workers who exploit. But octos are much more solitary than bees.
The evolutionary path that led to the octos diverged from ours hundreds of millions of years ago, before the first animal crawled out of the sea. They must have developed a different way to solve the explore-exploit dilemma. Perhaps their eight-plus-one brains serve the same function as the different phases of human development, or the different varieties of bees. The playful, exploratory arms can come under the control of the brain when it’s time to act—to mate, feed or flee. The head might feel kind of like a preschool teacher on an outing, trying to corral eight wandering children and to get them to their destination. (Imagine if your arms were as contrary as your 2-year-old!)
We grown-up humans may not be so different. Human adults are “neotenous apes,” which means we retain more childhood characteristics than our primate relatives do. We keep our brains in our heads, but neuroscience and everyday experience suggest that we too have divided selves. My grown-up, efficient prefrontal cortex keeps my wandering, exploratory inner child in line. Or tries to, anyway.
There’s only one way to write: Just do it. But there seem to be a million ways not to write. I sit down to work on my column, write a sentence and—ping!—there’s a text with a video of my new baby grandson. One more sentence and I start ruminating about the latest virus variant, triggering a bout of obsessive Covid worry. Cut it out! I tell myself, and write one more sentence, and then I’m staring blankly out the window, my mind wandering: What was it with that weird movie last night? Should I make chicken pilaf or lamb tagine for dinner?
These different kinds of thinking are the subject of a paper I co-authored recently in the journal PNAS, which has an interesting back story. Zachary Irving is a brilliant young philosopher now at the University of Virginia, well-trained—as philosophers have to be—at thinking about thinking. He is especially interested in the kind of unconstrained thought we have when our mind wanders. Is mind-wandering really distinct from other kinds of thought, like simple distraction or obsessive rumination? And why do we do it so much?
Young children daydream a lot, so Zach came to visit my lab at Berkeley, where we study children’s thinking. Neuroscience has mainly focused on goal-directed, task-oriented thinking, but what is your brain doing when your mind wanders? To answer that question, we worked with Julia Kam, now at the University of Calgary, and Robert Knight to design an experiment that involved giving 45 people a tedious but demanding task: pressing an arrow when a cue appeared on the screen. The participants did this more than 800 times for 40 minutes, and at random intervals we asked them to report what they were thinking. Were they thinking about the task or something else? Were they obsessing about a single topic or were their minds freely wandering?
Meanwhile, the participants’ brain waves were being measured with electroencephalography or EEG. The study found that different types of thinking correspond to different brain wave patterns. Like earlier researchers, we found that brain waves are different when you pay attention to a task and when you get distracted. But we found that different types of distraction also have different brain signatures. We compared what happens when your mind is captured by an internal obsession like worrying about Covid, and what happens when it wanders freely.
When your mind wanders there’s a distinctive increase in a particular measure called frontal alpha power, which captures a particular type of wave coming from the frontal lobe of the brain. That’s especially interesting because the same brain waves are associated with creative thinking. People show more frontal alpha power when they are solving a task that requires creativity, and more creative people show more of this kind of activation than less creative ones. One study even showed that stimulating frontal alpha led to better performance on a creativity task.
There was also more variability in those frontal alpha waves when thoughts wandered than when they were focused. The brain patterns went up and down more during those thoughts, just like the thoughts themselves.
We puritanically tend to value task-related thinking above everything else. But these results suggest that simply letting your mind wander, the way kids do, has merits too. My wandering mind made this column harder to write. But maybe it came out better as a result.
What do the haves owe to the have-nots? Should a society redistribute resources from some people to others? These questions are central to the economic policy differences between left and right. The opposing views might seem completely irreconcilable. But a new paper in the journal Cognition suggests that people of all political stripes have surprisingly similar views about redistribution, at least in the abstract.
Daniel Nettle at Newcastle University and Rebecca Saxe at MIT presented 2,400 people in the U.K. with stories about how an imaginary village could divvy up the food people grew in their gardens. A simple graphic allowed the participants to say how much food they thought should go from villagers who had more to those who had less.
The scenario allowed the researchers to systematically vary aspects of the problem. They looked at four factors. How much did the garden yields depend on luck? In some versions of the scenario, the weather had a big influence on how much food each villager produced; in others, luck was less important. How homogenous was the village? In some versions the villagers had very similar “beliefs, customs and appearance,” in others they were “rather different.” Were the villagers at peace with other villages or under attack? And was food abundant or scarce in general?
The participants also reported how far to the right or left their politics were, and across the spectrum the results were very consistent. Everybody thought there should be some redistribution of food—there weren’t any real Scrooges. People of all political persuasions thought there should be more redistribution when luck played a larger role, when the village was more homogenous, when the village was under attack and when resources were abundant. Political views did play a small role in people’s judgments: Those on the right were slightly less likely to redistribute than those on the left. But politics was much less important than the particular story of that village.
You could think of these experiments as an empirical version of the philosopher John Rawls’s famous thought experiment about “the veil of ignorance.” Rawls thought that we could agree in principle about what kind of economic system is fair if we had no idea what our particular role in that system would be—if we didn’t know whether we would be born rich or poor, smart or dull, American or Chinese. The imaginary villages suggest that, at least for a large selection of 21st century Britons, Rawls was right: People do have similar intuitions about what’s fair.
The most interesting point, which has been reflected in other psychological studies as well, is that people’s views on fairness depend more on the factual details of particular situations than on their partisan positions. That’s true even on a topic as obviously political and controversial as redistribution.
Of course, in these studies participants were explicitly told the facts about each village. In real life, it can be hard to know whether someone is lucky or hard-working, or what common features make people part of the same community. And sometimes we might want to argue about these intuitions themselves: Is it really better to redistribute more when resources are more abundant, or when people are more similar? But first, we have to do the hard work to make sure people share the same information and have access to the same truth. Then, at least sometimes, reason and persuasion can prevail.
As we get older we get slower, creakier and stiffer—and a lot happier. This might seem surprising, but it’s one of the most robust results in psychology, and it’s true regardless of income, class or culture. In our 70s and 80s, we are happier than when we were strong and beautiful 20-year-olds.
There are a couple of theories about why this is. We may get better at avoiding stressful situations—we figure out how to dodge that tense work meeting or family squabble. Or there may be something about aging that makes it easier to tolerate stress, even when we can’t avoid it.
The Covid-19 pandemic is a test case for this principle. It’s a terrible threat that is stressful for everyone, but it’s especially dangerous for older people, who are far more likely to die from the disease. Does the association between aging and happiness still hold?
Apparently the answer is yes. According to a new study by Laura Carstensen and colleagues at Stanford University, older people are happier even during the pandemic.
Think back to the first Covid surge in North America last April. The full awfulness of the plague had become apparent, and the uncertainty just made it scarier. We were all anxiously washing our groceries and trying to stay home. That month, the researchers surveyed a representative sample of 974 people from 18 to 74 years old, asking how often and how intensely they had felt 29 different positive and negative emotions in the past week. How often had they been calm or peaceful, concerned or anxious? The participants also reported how much they felt personally at risk from the virus and how risky they thought it was for people in general.
Older people rationally and accurately said that they were more at risk than younger ones. But surprisingly, they also reported experiencing more positive emotions and fewer negative ones than younger people did. Even when the researchers controlled for other factors like income and personality, older people were still happier. In particular, they were more calm, quiet and appreciative, and less concerned and anxious.
The results suggest that older people aren’t happier just because they’re better at avoiding stress—Covid-19 is stressful for everyone. But it’s not so clear just what is responsible. Prof. Carstensen suggests that when there is less time ahead of us, we focus more on the positive parts of the time we have left. As we sometimes sigh when we dodge a conflict, “life is too short”—and it gets shorter as we get older.
Another possibility is that in later life we play a different social role. Humans live much longer than our closest primate relatives: Chimps die when they are around 50, but even in hunter-gatherer cultures humans live into their 70s. Those bonus years are especially puzzling because women, at least, stop reproducing after menopause.
I think those later years may be adapted to allow us to care and teach. Instead of striving to get mates and resources and a place in the pecking order, older people can focus on helping the next generation. We take care of others and pass on our resources, skills and knowledge, instead of working for our own success. As a result, we may be released from the intense emotions and motivations that drive us in our earlier lives. Age grants us an equanimity that even Covid-19 can’t entirely conquer.
To train an artificial intelligence using “machine learning,” you give it a goal, such as getting a high score in a videogame or picking out all the photos in a set that have a cat in them. But you don’t actually tell the AI how to achieve the goal. You just give it lots of examples of success and failure, and it figures out how to solve the problem itself.
But imagine this sorcerer’s apprentice scenario, first proposed by the philosopher Nick Bostrom. One day in the future, someone builds an advanced AI very much smarter than any current system and gives it the goal of making paper clips. The AI, faithfully following instructions, takes over the world’s machines and starts to demolish everything from pots and pans to cars and skyscrapers so it can melt down the raw material and turn it into paper clips. The AI is doing what it thinks its creator wanted, but it gets things disastrously wrong.
On social media, we may face a version of this apocalypse already. Instead of maximizing paper clips, Facebook and Twitter maximize clicks, by showing us things that their algorithms think we will be interested in. It seems like an innocent goal, but the problem is that outrage and fear are always more interesting, or at least more clickable, than sober information.
The gap between what we actually want and what an AI thinks we want is called the alignment problem, since we have to align the machine’s function with our own goals. A great deal of research in AI safety and ethics is devoted to trying to solve it. In his fascinating new book “The Alignment Problem,” writer and programmer Brian Christian describes a lot of this research, but he also suggests an interesting and unexpected place to look for solutions: parenting.
After all, parents know a lot about dealing with super-intelligent systems and trying to give them the right values and goals. Often, that means making children’s priorities align with ours, whether that means convincing a toddler to take a nap or teaching a teenager to stay away from drugs. A lot of the work of being a parent, or a caregiver or teacher more generally, is about solving the alignment problem.
But when it comes to children, there’s an added twist. Computer programmers hope to make an AI that will do exactly what they want. But as parents, we don’t want our children to have exactly the same preferences and accomplishments that we do. We want them to become autonomous, with their own goals and values, which may turn out to be better than our own.
One possible solution to the alignment problem is to design AIs that are more skilled at divining what humans really want, even when we don’t quite know ourselves. This would be a sort of Stepford Wife AI, slavishly devoted to serving us.
But it might be better to think of creating AIs as more like parenting, as the science fiction writer Ted Chiang does in his beautiful story “The Lifecycle of Software Objects.” The story imagines a future where people adopt and train childlike AIs called “digients” as a kind of game. Soon, though, the “parents” come to love the artificial children they care for, and ultimately they face the same dilemmas of independence and care that parents of human children do. If we ever do create truly human-level intelligence in machines, we may need to give them mothers.
Where do good ideas come from? How can we find new solutions to difficult, urgent problems, from the pandemic to the climate crisis? A zillion consulting firms and business books may claim that they know the answer, but there’s been remarkably little empirical data.
Elena Miu and Luke Rendell at the University of St. Andrews, like many other biologists, argue that “cultural evolution” is one of the secrets. Human beings gradually accumulate new ideas and solutions. New technologies, from stone axes to smartphones, almost always come from the interaction of many individual problem-solvers rather than a lone genius. But how could such a complicated process be studied?
In two studies, reported in the journals Nature Communications in 2018 and Science Advances in 2020, Prof. Miu and colleagues cleverly took advantage of data from coding competitions. From 1998 to 2012, the software company MathWorks held a series of 19 public competitions to find the best coding solutions to computer-science problems. Nearly 2,000 participants submitted more than 45,000 entries. There was no single correct solution to the problems; instead, the contestants tried to produce code that would be simpler and work better, with judges assigning a score to each entry.
All the solutions and scores were open for public viewing, so new contestants could see how earlier ones approached the problems. Researchers were able to measure how similar each new solution was to earlier attempts, allowing them to witness cultural evolution in action.
One of the central challenges of cultural evolution is how to balance imitation and innovation. Imitation lets us take advantage of all the ideas that our ancestors have discovered before us. But innovation is also crucial, since if everybody just copied everybody else, we’d never make any progress.
Prof. Miu and her colleagues found that the coding contestants fell into three groups. There were “copiers” who consistently imitated the successful solutions, making only the smallest changes. There were also “mavericks” who didn’t copy the entries that were already out there but tried something new, more like the stereotypical lone genius. And then there were “pragmatists” who flexibly switched back and forth between copying and innovating.
The researchers found that pragmatists were by far the most likely to receive high scores. They built on the work that had already been done, but unlike the simple copiers, they substantially altered and improved the code, too.
The researchers found similar trade-offs when they focused on individual entries instead of on contestants, who could submit more than one solution to a problem. About 75% of entries were “tweaks,” making small changes to solutions that other people had already suggested. But there were also “leaps,” solutions that were very different from the ones already out there.
Overall, the leaps were much less likely to be successful than the tweaks; most of them went nowhere. But when leaps were successful, they led to much better solutions and opened up whole new sets of ideas. In fact, there was a consistent pattern: Someone would introduce a fabulous new leap and then the next generation of contestants would refine it with tweaks.
What can these studies tell us about how to solve real-life problems? Diversity is key. Rather than the lone genius, it’s the combination of different kinds of knowledge and temperament, humble tweaks and bold leaps, that produces new solutions.
What makes a good life? Philosophers have offered two classic answers to the question, captured by different Greek words for happiness, hedonia and eudaimonia. A hedonic life is free from pain and full of everyday pleasure—calm, safe and serene. A eudaemonic life is a virtuous and purposeful one, full of meaning.
But in a new study, philosopher Lorraine Besser of Middlebury College and psychologist Shigehiro Oishi of the University of Virginia argue that there is a third important element of a good life, which they call “psychological richness.” And they show that ordinary people around the world think so, too.
According to this view, a good life is one that is interesting, varied and surprising—even if some of those surprises aren’t necessarily pleasant ones. In fact, the things that make a life psychologically rich may actually make it less happy in the ordinary sense.
After all, to put it bluntly, a happy life can also be boring. Adventures, explorations and crises may be painful, but at least they’re interesting. A psychologically rich life may be less eudaemonic, too. Those unexpected turns may lead you to stray from your original purpose and act in ways that are less than virtuous.
Profs. Besser and Oishi make the case for a psychologically rich life in a paper that has just appeared in the journal Philosophical Psychology. But is this a life that most people would actually want, or is it just for the sort of people who write philosophy articles?
To find out, the authors and their colleagues did an extensive study involving more than 3,000 people in nine countries, recently published in the journal Affective Science. The researchers gave participants a list of 15 descriptive words such as “pleasant,” “meaningful” and “interesting,” and asked which best described a good life.
When they analyzed the responses, Profs. Besser and Oishi found that people do indeed think that a happy and meaningful life is a good life. But they also think a psychologically rich life is important. In fact, across different cultures, about 10-15% of people said that if they were forced to choose, they would go for a psychologically rich life over a happy or meaningful one.
In a second experiment the researchers posed the question a different way. Instead of asking people what kind of life they would choose, they asked what people regretted about the life they had actually led. Did they regret decisions that made their lives less happy or less meaningful? Or did they regret passing up a chance for interesting and surprising experiences? If they could undo one decision, what would it be? When people thought about their regrets they were even more likely to value psychological richness—about 30% of people, for example, in both the U.S. and South Korea.
The desire for a psychologically rich life may go beyond just avoiding boredom. After all, the unexpected, even the tragic, can have a transformative power that goes beyond the hedonic or eudaemonic. As a great Leonard Cohen song says, it’s the cracks that let the light come in.
It might seem obvious that you need a brain to be intelligent, but a new area of research called “basal cognition” explores whether there are kinds of intelligence that don’t require neurons and synapses. Some of the research was reported in a special issue of the Philosophical Transactions of the Royal Society last year. These studies may help to answer deep questions about the nature and evolution of intelligence, but the experiments are also just plain fascinating, with truly weird creatures and even weirder results.
Slime molds, for example, are very large single-celled organisms that can agglomerate into masses, creeping across the forest floor and feeding on decaying plants. (One type is called dog vomit slime mold, which gives you an idea of what they look like.) They can also retreat into a sort of freeze-dried capsule form, losing much of their protein and DNA in the process, and stay that way for months. But just add water and the reconstituted slime mold is good as new.
They are also fussy eaters. If you put them down on top of their favorite meal of agar and Quaker oats and add salt or quinine to one part of it, they’ll avoid that part, at least at first. The biologists Aurele Bousard and Audrey Dussutour at the University of Toulouse and colleagues used this fact to show that slime molds can learn in a simple way called habituation. If the only way to get the oats is to eat the salt too, the molds eventually get used to it and stop objecting. Remarkably, this information somehow persists for up to a month, even through their period of desiccated hibernation.
Flatworms are equally weird. Cut one into a hundred pieces and each piece will regenerate into a perfect new worm. (A slime mold-flatworm alliance against the humans would make a great horror movie.) But how do the cells in the severed flatworm fragment know how to grow into a head and a tail?
Santosh Manicka and Michael Levin of Tufts University argue in the special issue that regeneration involves a kind of cognition. The process is remarkably robust: You can move the cells that usually make a head to the tail location, and they will somehow figure out how to make a tail instead. The researchers argue that this ability to take multiple paths to achieve the same goal requires a kind of intelligence.
Regeneration involves the standard mechanisms that allow the DNA in a cell to manufacture proteins. But Dr. Levin and his colleagues have shown that flatworm cells also communicate information through electricity, signaling to other nearby cells in much the way that neurons do. In experiments that would make Dr. Frankenstein proud, the researchers altered those electrical signals to produce a worm that consistently regenerates with two heads, or even one that grows the head of another related species of flatworm.
This research has some practical implications: It would be great if human accident victims could grow back their limbs as easily as flatworms do. But the studies also speak to a profound biological and philosophical conundrum. Where do cognition and intelligence come from? How could natural selection turn single-celled amoebas into Homo sapiens? Dr. Levin thinks that the electrical communications that help flatworms regenerate might have evolved into the subtler mechanisms of brain communication. Those creepy slime molds and flatworms might help to explain how humans got smart.
Like children, older people need special care. The current crisis has made this vivid. Millions of people have transformed their lives—staying indoors, wearing masks, practicing social distancing—to protect their vulnerable parents and grandparents, as well as other elders they may never even see.
But this raises a puzzling scientific paradox. We know that human beings are shaped by the forces of evolution and natural selection. So why did we evolve to be vulnerable for such a long stretch of our lives? And why do strong, able humans in the prime of life put so much time and energy into caring for those who are no longer so productive? Chimpanzees rarely live past 50 and there is no chimp equivalent of menopause. But even in hunter-gatherer cultures without modern medicine, if you make it past childhood you may well live into your 70s. Human old age, cognition and culture evolved together.
A new special issue of the Philosophical Transactions of the Royal Society devoted to “Life History and Learning,” which I coedited, brings together psychologists, anthropologists and evolutionary biologists to try to answer these questions.
Humans have always been “extractive foragers,” using complicated techniques like hunting and fishing that let us find extra calories in almost any environment. Our big brains make this possible, but we need culture and teaching to allow us to develop complex skills over many generations.
In the special issue, Michael Gurven of the University of California at Santa Barbara and colleagues argue that older people may have a special place in that process. Many foraging skills require years of practice: Hunters don’t reach their peak until they are in their 30s.
But it’s hard to practice a skill and teach it to someone else at the same time. (Sunday pancakes take twice as long when the kids help.) Prof. Gurven and his team found that, mathematically, the best evolutionary strategy for developing many complex skills was to have the old teach the young. That way the peak, prime-of-life performers can concentrate on getting things done, while young learners are matched with older, more knowledgeable but less productive teachers.
The researchers analyzed more than 20,000 observations collected from 40 different locations, and found this pattern in many different hunting and gathering cultures. Children were most likely to learn either from other, older children or from elders. The grandparents weren’t as strong or effective providers as the 30-year-olds, but they were most likely to be teachers.
This may explain why humans evolved to have a long old age: The advantages of teaching selected for those extra years of human life. From an evolutionary perspective, caring for vulnerable humans at either end of life lets all humans flourish.
The pandemic has made us realize both the importance and the difficulty of this kind of care. In the richest society in history, the job of caring for the old and the young involves little money and less status. Elders are often isolated. Perhaps after the pandemic we will appreciate better the profound connection between brilliant, fragile young learners and wise, vulnerable old teachers, and bring the grandchildren and grandparents back together again.
The last few weeks have seen extraordinary displays of altruism. Ordinary people have transformed their lives—partly to protect themselves and the people they love from the Covid-19 pandemic, but also to help other people they don’t even know. But where does altruism come from? How could evolution by natural selection produce creatures who sacrifice themselves for others?
In her 2019 book “Conscience,” the philosopher Patricia Churchland argues that altruism has its roots in our mammalian ancestry. The primordial example of an altruistic emotion is the love that mothers feel toward their babies. Helpless baby mammals require special care from their nursing mothers, and emotional attachment guarantees this care.
Those emotions and motivations are associated with a distinctive pattern of brain activations, hormones and genes. Twenty years ago, neuroscientists discovered that the same brain mechanisms that accompany mother love also operate when mammals care about their mates. Only about 5% of mammal species are “pair-bonded,” with mates who act like partners—seeking out each other’s company, raising babies together and actively helping each other. It turns out that those pair-bonded species, like prairie voles, have co-opted the biology that underpins mother love.
In turn, Prof. Churchland argues, those biological mechanisms could underpin broader altruistic cooperation in species with larger interdependent social groups, like wolves and monkeys. When the members of a species have to hunt, forage or—most important—raise their young together, they start caring about each other too. In humans, who have an exceptionally long childhood, that love and care extended not only to mates but to “alloparents”—unrelated people who help take care of children.
In a 2017 paper in the journal Cognition, researchers Rachel Magid and Laura Schulz of M.I.T. showed that when we care for another person in this altruistic way, we extend our own needs to include theirs, in a process they describe as “moral alchemy.” A caregiver—whether human or vole, mother or father or grandparent or alloparent—doesn’t just do what’s good for the baby because of an abstract obligation or an implicit contract. It’s because the baby’s needs have become as important to them as their own.
Our long, helpless childhood gives humans a great advantage in return: It gives us time to learn, imagine and invent. We combine these intellectual abilities with the primordial emotions of care. We can imagine new technologies and use them for altruistic purposes, from medical discoveries about how to fight viruses to Zoom sessions that let me tell my grandchildren I love them.
But the largest and most profound imaginative human leap comes when we take those altruistic emotions and apply them beyond the family and village, to strangers, foreigners and the world at large. Prof. Churchland argues that we don’t begin with universally applicable rational principles—Kant’s categorical imperative or the greatest good for the greatest number—and then apply them to particular cases. Instead, we begin with the close and the personal and expand those attachments to a wider circle.
In evolutionary history, the extension of altruism from mothers and babies to larger groups allowed humans to cooperate and thrive. Now our lives are entangled with those of everyone else on the planet, and our survival depends on widening the circle even more.
A few weeks ago, I took part in a free-wheeling annual gathering of social scientists from the academic and tech worlds. The psychologists and political scientists, data analysts and sociologists at Social Science Foo Camp, held in Menlo Park, Calif., were preoccupied with one problem in particular: With an election looming, what can we do about the spread of misinformation and fake news, especially on social media?
Fact-checking all the billions of stories on social media is obviously impractical. It may not be effective either. Earlier studies have shown an “illusory truth” effect: Repeating a story, even if you say that it’s false, may actually make people more likely to remember it as true. Maybe, in our highly polarized world, they can’t even tell the difference; all that matters is whether the story supports your politics.
But new research contradicts this pessimistic picture. David Rand of MIT and Gordon Pennycook of the University of Regina have suggested that “cognitive laziness” may be a bigger problem than bias. It’s not that people can’t tell or don’t care whether a story is true; it’s just that they don’t put in the effort to find out.
A new study in the Journal of Experimental Psychology by Profs. Rand and Pennycook, with Bence Bago of the University of Toulouse, shows that if you give people time to think, they do better at judging whether news stories on social media are true or false.
The researchers showed more than 1,000 people examples of true and false headlines that had actually appeared online—real fake news, as it were. Some headlines were slanted toward Republicans, like “Obama was going to Castro’s funeral until Trump told him this,” while others were slanted toward Democrats, like “Gorsuch started ‘fascism forever’ club at elite prep school.”
The researchers asked participants to judge whether the headlines were accurate. One group was allowed to take as much time as they wanted to make a judgment, while another group had to decide in seven seconds, while they were also trying to remember a pattern of dots shown on the screen. Then they had a chance to think it over and try again. The participants also filled out a questionnaire about their political views.
As you might expect, people were somewhat more likely to believe fake news that fit their ideological leanings. But regardless of their politics, people were more likely to spot the difference between real and fake news when they had time to think than when they had to decide quickly.
Of course, when we browse Twitter or Facebook, we are more likely to be rushed and distracted than patiently reflective. Lots of items are pouring quickly through our feeds, and nobody is asking us to pause and think about whether those items are accurate.
But it would be relatively easy for the platforms to slow us down a little and make us more thoughtful. For example, you could simply ask people to rate how accurate a story is before they share it. In preliminary, still unpublished work, Profs. Rand and Pennycook found that asking people to judge the accuracy of one story on Twitter made them less likely to share others that were inaccurate.
Cognitive science tells us that people are stupider than we think in some ways and smarter in others. The challenge is to design media that support our cognitive strengths instead of exploiting our weaknesses.
Like many people with children or grandchildren, I spent December watching the new Star Wars TV series “The Mandalorian.” Across America, the show led to a remarkable Christmas truce among bitterly competing factions. Rural or urban, Democrat or Republican, we all love Baby Yoda.
In case you spent the last month in a monastic retreat, Baby Yoda is the weird but irresistibly adorable creature who is the heart of the series. (He isn’t actually Yoda but a baby of the same species.) The Mandalorian, a ferocious bounty-hunter in a metal helmet, takes on the job of hunting down Baby Yoda but ends up rescuing and caring for him instead. This means finding snacks and sitters and keeping the baby from playing with the knob on the starship gear shift.
Why do the Mandalorian and the whole internet love Baby Yoda so much? The answer may tell us something profound about human evolution.
Humans have a particularly long and helpless infancy. Our babies depend on older caregivers for twice as long as chimp babies do. As a result, we need more varied caregiving. Chimp mothers look after their babies by themselves, but as the great anthropologist Sarah Hrdy pointed out in her 2009 book “Mothers and Others,” human mothers have always been assisted by fathers, grandparents and “alloparents”—people who look after other folks’ children. No other animal has so many different kinds of caregivers.
Those caregivers are what anthropologists call “facultative,” meaning that they only provide care in certain circumstances and not others. Once they are committed to a baby, however, they may be just as devoted and effective as biological mothers. The key factor seems to be the very act of caregiving itself. We don’t take care of babies because we love them; instead, like the Mandalorian, we love babies once we start taking care of them.
In a new paper forthcoming in the Philosophical Transactions of the Royal Society, Dr. Hrdy and Judith Burkart argue that this led to the evolution of special social adaptations in human babies, since they have to actively persuade all those facultative caregivers to love them. Studies show that babies have physical features that automatically attract care—those adorable, “awww”-inducing big eyes and heads and fat cheeks and little noses, all of which are exaggerated in Baby Yoda. Drs. Hrdy and Burkart think that fat cheeks may be particularly important: A baby’s plumpness may be a signal that it’s especially worth investing in.
The way a baby acts is just as important as the way it looks. Even though babies can’t talk, they gesture and make eye contact. Studies show that human infants already understand and react to the emotions and desires of others. Drs. Hrdy and Burkart argue that these very early abilities for social cooperation and emotional intelligence evolved to help attract caregivers.
They also suggest that once these abilities were in place in babies, they allowed more cooperation between adults as well. All those mothers and fathers and alloparents had to coordinate their efforts to take care of the babies. So there was a kind of benign evolutionary circle: As babies became more socially skilled, they were better at attracting caregivers, and when they grew up they became better caregivers themselves.
So the story arc of “The Mandalorian” is also the story of human evolution. He rescues Baby Yoda, but Baby Yoda also rescues him. For adults, taking care of adorable babies together lets us escape from isolation and conflict so we can care for each other, too.
Ever since the Greeks, people have been complaining that the next generation is a disappointment. Nowadays, it’s Boomers fighting with those aggravating, avocado-toast crunching, emoji-texting millennials. The feeling is seductive—but isn’t it really an illusion? After all, the old folks who are complaining were once on the receiving end of the same complaints themselves. In the 1960s, the Boomers’ parents denounced them as irresponsible hippies. Have people really been steadily deteriorating since ancient times?
In a new paper in the journal Science Advances, John Protzko and Jonathan Schooler of the University of California at Santa Barbara call this feeling the “kids these days” effect. And their research suggests that it has as much to do with how we think about ourselves as it does with those darned kids.
The researchers studied a sample of 1,824 people, chosen to be representative of the U.S. population. They asked the participants about how the next generation compared with earlier ones—in particular, whether they were respectful, intelligent and well-read. Overall, people gave the young lower ratings, in keeping with the “kids these days” effect.
But the interesting thing was that people responded differently depending on what they were like themselves. People who cared most about respect were most likely to say that the next generation was disrespectful. Those who scored highest on an IQ test were most likely to say that the next generation was less intelligent. And those who did best on an author-recognition test were most likely to say that the next generation didn’t like reading. It seems that older people weren’t responding to objective facts about the young; instead, they were making subjective comparisons in which they themselves came off best.
Most significantly, Dr. Protzko and Dr. Schooler showed that when people’s view of themselves changed, so did their view of the next generation. In one part of the experiment, researchers told participants that they had either scored very well or very badly on the author-recognition test and then asked them to make judgments about the reading abilities of the young. When people believed that they were worse readers themselves, they also were less likely to think badly of the next generation.
Dr. Protzko and Dr. Schooler think the “kids these days” illusion works like this. Older people who excel in a particular trait look at younger people and see that, on average, they are less well-read, respectful or intelligent than they are themselves. Then they compare those young people to their own memories of what they were like at the same age.
But those memories are unreliable. Studies by the Stanford psychologist Lee Ross have shown that we tend to adjust our view of our past selves to match the present. For example, we tend to think that our past political views are much closer to the ones we hold now than they actually were.
In addition to overestimating how much the past resembled the present, people who excel in a particular trait forget that they aren’t typical of their own generation. They may generalize the statement “I loved to read when I was young” to conclude “and everybody else did too.” When we complain about the next generation, we’re actually comparing them to an idealized version of our own past, obscured by the flattering fog of memory.
Today’s children and teenagers seem to be taking fewer risks. The trend has had some good effects, like decreases in teenage pregnancy, drug use and even accidents. On the other hand, there has been an equally dramatic increase in anxiety in children and teenagers.
If life is less risky, why are young people more fearful? A new study in the journal Nature Human Behaviour, by Nim Tottenham at Columbia University, Regina Sullivan at New York University and their colleagues, suggests an answer. Young people are designed to take risks, and avoiding them too much may lead to anxiety. But productive risk-taking depends on having a sense of safety—knowing that a parent is there in the background to take care of you.
The study takes off from one of the oldest results in psychology. Put a rat in a maze where, if it goes down a certain path, it receives an electric shock. The next time it’s put in the maze, the rat will avoid the path that led to the shock. This kind of “avoidance learning” is fundamental, but it has an important drawback: If the rat always avoids the risky path, it will never learn whether the risk is still there or how to cope with it.
Scientists think that avoidance learning may be part of the mechanism behind the development of anxiety, phobias and PTSD. One rocky flight can make you terrified to get on a plane, and so can keep you from learning that most flights are just fine. Counter-intuitively, the best cure for a phobia is to gradually expose the patient to the scary stimulus until their brain is convinced that no harm will actually result. (A psychologist friend of mine cured his snake phobia by raising a snake in his home.)
The classic maze studies were done with adult rats. But in a 2006 study, Prof. Sullivan and her colleagues found that young rats—the equivalent of human children and teenagers—react very differently. Remarkably, they actually preferred the path that led to the shock, choosing a risky but informative experience over a safe and boring one.
But the young rats only took the risk if their mother was present. It’s as if their mother’s presence was a cue that nothing really terrible would happen, allowing the young rats to confidently explore and learn about their environment.
In the new study, Prof. Tottenham and Prof. Sullivan found the same result with a group of 106 preschool children. The children were shown two shapes, one of which was accompanied by a loud, unpleasant noise. Sometimes the child’s parent was present during this part of the experiment, and sometimes they weren’t.
Then the children were invited to crawl through one of two tunnels to get a prize—one marked with the aversive shape and one with the innocuous one. When the parent had been present during their introduction to the shapes, the young children preferred to explore the tunnel marked with the shape that had led to the unpleasant noise. But when the parents had been absent, the children preferred the innocuous shape.
More than 50 years ago, the psychologist John Bowlby suggested that the secure base of “attachment”—the unconditional love that links parents and children—is what allows children to explore the world, and these experiments suggest that he was right. Keeping children from ever taking risks or experiencing their consequences may be counterproductive. But a sense of parental care and stability appears to be just what’s needed for children to take risks productively and learn something new.
Teenagers are paradoxical. That’s a mild and detached way of saying something that parents often express with considerably stronger language. But the paradox is scientific as well as personal. In adolescence, helpless and dependent children who have relied on grown-ups for just about everything become independent people who can take care of themselves and help each other. At the same time, once cheerful and compliant children become rebellious teenage risk-takers, often to the point of self-destruction. Accidental deaths go up dramatically in adolescence.
A new study published in the journal Child Development, by Eveline Crone of the University of Leiden and colleagues, suggests that the positive and negative sides of teenagers go hand in hand. The study is part of a new wave of thinking about adolescence. For a long time, scientists and policy makers concentrated on the idea that teenagers were a problem that needed to be solved. The new work emphasizes that adolescence is a time of opportunity as well as risk.
The researchers studied “prosocial” and rebellious traits in more than 200 children and young adults, ranging from 11 to 28 years old. The participants filled out questionnaires about how often they did things that were altruistic and positive, like sacrificing their own interests to help a friend, or rebellious and negative, like getting drunk or staying out late.
Other studies have shown that rebellious behavior increases as you become a teenager and then fades away as you grow older. But the new study shows that, interestingly, the same pattern holds for prosocial behavior. Teenagers were more likely than younger children or adults to report that they did things like unselfishly help a friend.
Most significantly, there was a positive correlation between prosociality and rebelliousness. The teenagers who were more rebellious were also more likely to help others. The good and bad sides of adolescence seem to develop together.
Is there some common factor that underlies these apparently contradictory developments? One idea is that teenage behavior is related to what researchers call “reward sensitivity.” Decision-making always involves balancing rewards and risks, benefits and costs. “Reward sensitivity” measures how much reward it takes to outweigh risk.
Teenagers are particularly sensitive to social rewards—winning the game, impressing a new friend, getting that boy to notice you. Reward sensitivity, like prosocial behavior and risk-taking, seems to go up in adolescence and then down again as we age. Somehow, when you hit 30, the chance that something exciting and new will happen at that party just doesn’t seem to outweigh the effort of getting up off the couch.
The study participants filled out a separate “fun-seeking” questionnaire that measured reward sensitivity with statements like “I’m always willing to try something new if I think it will be fun.” This scale correlated with both prosociality and rebelliousness. What’s more, the researchers were able to track the responses of participants over a four-year period and found that those who had been most eager for experience when they were younger became the most rebellious teenagers—but also the most altruistic.
This new research suggests that Cyndi Lauper was right: Girls (and boys) just wanna have fun, and that’s what makes them into paradoxically good and bad, rebellious and responsible teenagers.
Do our culture and language shape the way we think? A new paper in the Proceedings of the National Academy of Sciences, by Caren Walker at the University of California at San Diego, Alex Carstensen at Stanford and their colleagues, tried to answer this ancient question. The researchers discovered that very young Chinese and American toddlers start out thinking about the world in similar ways. But by the time they are 3 years old, they are already showing differences based on their cultures.
Dr. Walker’s research took off from earlier work that she and I did together at the University of California at Berkeley. We wanted to know whether children could understand abstract relationships such as “same” and “different.” We showed children of various ages a machine that lights up when you put a block of a certain color and shape on it. Even toddlers can easily figure out that a green block makes the machine go while a blue block doesn’t.
But what if the children saw that two objects that were the same—say, two red square blocks—made the machine light up, while two objects that were different didn’t? We showed children this pattern and asked them to make the machine light up, giving them a choice between a tray with two new identical objects—say, two blue round blocks—or another tray with two different objects.
At 18 months old, toddlers had no trouble figuring out that the relationship between the blocks was the important thing: They put the two similar objects on the machine. But much to our surprise, older children did worse. The 3-year-olds had a hard time recognizing that the relationship was what mattered; they had actually learned that the individual objects were more important than the relationships between them.
But these were all American children. Dr. Walker and her colleagues repeated the machine experiment with children in China and found a different result. The Chinese toddlers, like the toddlers in the U.S., were really good at learning the relationships; but so were the 3-year-olds. Unlike the American children, they hadn’t developed a bias toward objects.
In fact, when they saw an ambiguous pattern, which could either be due to something about the individual objects or something about the relationships between them, the Chinese preschoolers actually preferred to focus on the relationships. The American children focused on the objects.
The toddlers in both cultures seemed to be equally open to different ways of thinking. But by age 3, something about their everyday experiences had already pushed them to focus on different aspects of the world. Language could be part of the answer: English emphasizes nouns much more than Chinese does, which might affect the way speakers of each language think about objects.
Of course, individuals and relationships are both important in the social and physical worlds. And cultural conditioning isn’t absolute: American adults can reason about relationships, just as Chinese adults can reason about objects. But the differences in focus and attention, in what seems obvious and what seems unusual, may play out in all sorts of subtle differences in the way we think, reason and act. And those differences may start to emerge when we are very young children.
Where does consciousness come from? When and how did it evolve? The one person I’m sure is conscious is myself, of course, and I’m willing to believe that my fellow human beings, and familiar animals like cats and dogs, are conscious too. But what about bumblebees and worms? Or clams, oak trees and rocks?
Some philosophers identify consciousness with the complex, reflective, self-conscious experiences that we have when, say, we are sitting in an armchair and thinking about consciousness. As a result, they argue that even babies and animals aren’t really conscious. At the other end of the spectrum, some philosophers have argued for “pan-psychism,” the idea that consciousness is everywhere, even in atoms.
Recently, however, a number of biologists and philosophers have argued that consciousness was born from a specific event in our evolutionary history: the Cambrian explosion. A new book, “The Evolution of the Sensitive Soul” by the Israeli biologists Simona Ginsburg and Eva Jablonka, makes an extended case for this idea.
For around 100 million years, from about 635 to 542 million years ago, the first large multicellular organisms emerged on Earth. Biologists sometimes call this period the “Garden of Ediacara”—a time when, around the globe, a rich variety of strange creatures spent their lives attached to the ocean floor, where they fed, reproduced and died without doing very much in between. There were a few tiny slugs and worms toward the end of this period, but most of the creatures, such as the flat, frond-like, quilted Dickinsonia, were unlike any plants or animals living today.
Then, quite suddenly by geological standards, most of these creatures disappeared. Between 530 and 520 million years ago, they were replaced by a remarkable proliferation of animals who lived quite differently. These animals started to move, to have brains and eyes, to seek out prey and avoid predators. Some of the creatures in the fossil record seem fantastic—like Anomalocaris, a three-foot-long insectlike predator, and Opabinia, with its five eyes and trunk-like proboscis ending in a grasping claw. But they included the ancestors of all current species of animals, from insects, crustaceans and mollusks to the earliest vertebrates, the creatures who eventually turned into us.
How do psychedelic drugs work? And can psychedelic experiences teach you something? People often say that these experiences are important, revelatory, life-changing. But how exactly does adding a chemical to your brain affect your mind?
The renaissance of scientific psychedelic research may help to answer these questions. A new study in the journal Nature by Gul Dolen at Johns Hopkins University and her colleagues explored how MDMA works in mice. MDMA, also known as Ecstasy, is the illegal and sometimes very dangerous “love drug” that fueled raves in the 1980s and is still around today. Recent research, though, suggests that MDMA may be effective in treating PTSD and anxiety, and the FDA has approved further studies to explore these possibilities. The new study shows exactly how MDMA influences the brain, at least in mice: It restores early openness to experience, especially social experience, and so makes it easier for the mice to learn from new social information.
In both mice and humans, different parts of the brain are open to different kinds of information at different times. Neuroscientists talk about “plasticity”—the ability of the brain to change as a result of new experiences. Our brains are more plastic in childhood and then become more rigid as we age. In humans, the visual system has a “sensitive period” in the first few years when it can be rewired by experience—that is why it’s so important to correct babies’ vision problems. There is a different sensitive period for language: Language learning gets noticeably harder at puberty.
Similarly, Dr. Dolen found that there was a sensitive period for social learning in mice. The mice spent time with other mice in one colored cage and spent time alone in a different-colored cage. The young mice would learn to move toward the color that was associated with the social experience, and this learning reached a peak in adolescence (“Hey, let’s hit the red club where the cool kids hang out!”). Normally, the adult mice stopped learning this connection (“Who cares? I’d rather just stay home and watch TV”).
The researchers showed that this was because the younger mice had more plasticity in the nucleus accumbens, a part of the brain that is involved in motivation and learning. But after a dose of MDMA, the adult mice were able to learn once more, and they continued to learn the link for several weeks afterward.
Unlike other psychedelics, MDMA makes people feel especially close to the other people around them. (Ravers make “cuddle puddles,” a whole group of people locked in a collective embrace.) The new study suggests that this has something to do with the particular chemical profile of the drug. The social plasticity effect depended on a combination of two different “neurotransmitters”: serotonin, which seems to be involved in plasticity and psychedelic effects in general, and oxytocin, the “tend and befriend” chemical that is particularly involved in social closeness and trust. So MDMA seems to work by making the brain more open in general but especially more open to being with and learning about others.
This study, like an increasing number of others, suggests that psychedelic chemicals can make the brain more open to learning and change. What gets learned or changed, though, depends on the particular chemical, the particular input that reaches the brain, and the particular information that reaches the mind.
A psychedelic experience might be just entertaining, or even terrifying or destructive—certainly not something for casual experimentation. But in the right therapeutic setting, it might actually be revelatory.
Everybody’s talking about artificial intelligence. Some people even argue that AI will lead, quite literally, to either immortality or the end of the world. Neither of those possibilities seems terribly likely, at least in the near future. But there is still a remarkable amount of debate about just what AI can do and what it means for all of us human intelligences.
A new book called “Possible Minds: 25 Ways of Looking at AI,” edited by John Brockman, includes a range of big-picture essays about what AI can do and what it might mean for the future. The authors include people who are working in the trenches of computer science, like Anca Dragan, who designs new kinds of AI-directed robots, and Rodney Brooks, who invented the Roomba, a robot vacuum cleaner. But it also includes philosophers like Daniel Dennett, psychologists like Steven Pinker and even art experts like the famous curator Hans Ulrich Obrist.
I wrote a chapter about why AI still can't solve problems that every 4-year-old can easily master. Although DeepMind's AlphaZero can beat a grandmaster at chess, it would still bomb at Attie Chess—the version of the game played by my 3-year-old grandson Atticus. In Attie Chess, you throw all of the pieces into the wastebasket, pick each one up, try to put them on the board and then throw them all in the wastebasket again. This apparently simple physical task is remarkably challenging even for the most sophisticated robots.
But reading through all the chapters, I began to sense that there’s a more profound way in which human intelligence is different from artificial intelligence, and there’s another reason why Attie Chess may be important.
The trick behind the recent advances in AI is that a human specifies a particular objective for the machine. It might be winning a chess game or distinguishing between pictures of cats and dogs on the internet. But it might also be something more important, like judging whether a work of art deserves to be in a museum or a defendant deserves to be in prison.
The basic technique is to give the computer millions of examples of games, images or previous judgments and to provide feedback. Which moves led to a high score? Which pictures did people label as dogs? What did the curators or judges decide in particular cases? The computer can then use machine learning techniques to try to figure out how to achieve the same objectives. In fact, machines have gotten better and better at learning how to win games or match human judgments. They often detect subtle statistical cues in the data that humans can’t even understand.
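The learning loop described above can be sketched in a few lines. This is a deliberately tiny illustration—a nearest-centroid classifier on invented "cat vs. dog" features—not the deep-learning systems the column discusses; the feature names and numbers are hypothetical.

```python
# A minimal sketch of learning from labeled examples: a human fixes the
# objective (label cats vs. dogs), and the machine fits itself to data.
# The features and numbers below are invented for illustration.

def train(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(model, features):
    """Choose the label whose centroid is closest to the new example."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical features: [ear pointiness, snout length]
labeled = [([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
           ([0.3, 0.9], "dog"), ([0.2, 0.8], "dog")]
model = train(labeled)
print(predict(model, [0.85, 0.25]))  # "cat"
```

The point of the sketch is the division of labor: the objective and the labels come from people; the machine only finds statistical regularities that match them.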
But people also can decide to change their objectives. A great judge can argue that slavery should be outlawed or that homosexuality should no longer be illegal. A great curator can make the case for an unprecedented new kind of art, like Cubism or Abstract Expressionism, that is very different from anything in the past. We invent brand new games and play them in new ways. In fact, when children play, they practice setting themselves new objectives, even when, as in Attie Chess, those goals look pretty silly from the adult perspective.
Indeed, the point of each new generation is to create new objectives—new games, new categories and new judgments. And yet, somehow, in a way that we don’t understand at all, we don’t merely slide into relativism. We can decide what is worth doing in a way that AI can’t.
Any new technology, from fire to Facebook, from internal combustion to the internet, brings unforeseen dangers and unintended consequences. Regulating and controlling those technologies is one of the great tasks of each generation, and there are no guarantees of success. In that regard, we have more to fear from natural human stupidity than artificial intelligence. But, so far at least, we are the only creatures who can decide not only what we want but whether we should want it.
We all know that it’s hard to get people to change their minds, even when they should. Studies show that when people see evidence that goes against their deeply ingrained beliefs, they often just dig in more firmly: Climate change deniers and anti-vaxxers are good examples. But why? Are we just naturally resistant to new facts? Or are our rational abilities distorted by biases and prejudices?
Of course, sometimes it can be perfectly rational to resist changing your beliefs. It all depends on how much evidence you have for those beliefs in the first place, and how strongly you believe them. You shouldn't overturn the periodic table every time a high-school student blunders in a chemistry lab and produces a weird result. In statistics, Bayesian methods give us a precise way of calculating the balance between old beliefs and new evidence.
Over the past 15 years, my lab and others have shown that, to a surprising extent, even very young children reason in this way. The conventional wisdom is that young children are irrational. They might stubbornly cling to their beliefs, no matter how much evidence they get to the contrary, or they might be irrationally prone to change their minds—flitting from one idea to the next regardless of the facts.
In a new study in the journal Child Development, my student Katie Kimura and I tested whether, on the contrary, children can actually change their beliefs rationally. We showed 4-year-old children a group of machines that lit up when you put blocks on them. Each machine had a plaque on the front with a colored shape on it. First, children saw that the machine would light up if you put a block on it that was the same color as the plaque, no matter what shape it was. A red block would activate a red machine, a blue block would make a blue machine go and so on. Children were actually quite good at learning this color rule: If you showed them a new yellow machine, they would choose a yellow block to make it go.
But then, without telling the children, we changed the rule so that the shape rather than the color made the machine go. Some children saw the machine work on the color rule four times and then saw one example of the shape rule. They held on to their first belief and stubbornly continued to pick the block with the same color as the machine.
But other children saw the opposite pattern—just one example of the color rule, followed by four examples of the shape rule. Those children rationally switched to the shape rule: If the plaque showed a red square, they would choose a blue square rather than a red circle to make the machine go. In other words, the children acted like good scientists. If there was more evidence for their current belief they held on to it, but if there was more evidence against it then they switched.
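The rational balance the children struck can be sketched as a Bayesian update. The numbers here are hypothetical—the 90% reliability figure is an assumption for illustration, not a parameter from the study—but the qualitative pattern matches what the children did.

```python
# Illustrative Bayesian update: weighing old beliefs against new evidence.
# The 0.9 reliability assumption is hypothetical, not from the study.

def posterior(prior_color, evidence):
    """Update P(color rule) given a list of observations.

    Each observation is 'C' (machine behaved as the color rule predicts)
    or 'S' (machine behaved as the shape rule predicts). We assume a
    rule-following machine matches its rule 90% of the time.
    """
    p_color = prior_color
    for obs in evidence:
        like_color = 0.9 if obs == 'C' else 0.1   # P(obs | color rule)
        like_shape = 0.1 if obs == 'C' else 0.9   # P(obs | shape rule)
        numer = like_color * p_color
        p_color = numer / (numer + like_shape * (1 - p_color))
    return p_color

# Four color-rule demonstrations, then one shape-rule surprise:
print(round(posterior(0.5, ['C', 'C', 'C', 'C', 'S']), 3))  # 0.999 — keep the color rule
# One color-rule demonstration, then four shape-rule examples:
print(round(posterior(0.5, ['C', 'S', 'S', 'S', 'S']), 3))  # 0.001 — switch to the shape rule
```

One surprising observation barely dents four confirmations, but four surprises overwhelm a single confirmation—exactly the asymmetry the 4-year-olds showed.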
Of course, the children in this study had some advantages that adults don’t. They could see the evidence with their own eyes and they trusted the experimenter. Most of our scientific evidence about how the world works—evidence about climate change or vaccinations, for example—comes to us in a much more roundabout way and depends on a long chain of testimony and trust.
Applying our natural scientific reasoning abilities in these contexts is more challenging, but there are hopeful signs. A new paper in PNAS by Gordon Pennycook and David Rand at Yale shows that ordinary people are surprisingly good at rating how trustworthy sources of information are, regardless of ideology. The authors suggest that social media algorithms could incorporate these trust ratings, which would help us to use our inborn rationality to make better decisions in a complicated world.
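One way a feed could use such ratings is to scale each item's score by crowdsourced trust in its source. The scores, source names and ranking formula below are hypothetical—a minimal sketch of the idea, not the method proposed in the PNAS paper.

```python
# Hypothetical sketch: down-weight feed items from sources that laypeople
# collectively rate as untrustworthy. All numbers and names are invented.

def rank_feed(items, trust):
    """Sort items by engagement score scaled by crowd trust in the source.

    Unknown sources get a neutral 0.5 trust rating.
    """
    return sorted(items,
                  key=lambda it: it["engagement"] * trust.get(it["source"], 0.5),
                  reverse=True)

trust = {"major_newspaper": 0.9, "hyperpartisan_blog": 0.2}  # crowd ratings, 0-1
feed = [
    {"source": "hyperpartisan_blog", "engagement": 1000},
    {"source": "major_newspaper", "engagement": 400},
]
for item in rank_feed(feed, trust):
    print(item["source"])
# major_newspaper ranks first: 400 * 0.9 = 360 beats 1000 * 0.2 = 200
```

Even a crude multiplier like this lets a widely distrusted source's viral post lose to a trusted source's quieter one.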
Over the holidays, my family confronted a profound generational divide. My grandchildren became obsessed with director Robert Zemeckis’s 2009 animated film “A Christmas Carol,” playing it over and over. But the grown-ups objected that the very realistic, almost-but-not-quite-human figures just seemed creepy.
The movie is a classic example of the phenomenon known as “the uncanny valley.” Up to a point, we prefer animated characters who look like people, but when those characters look too much like actual humans, they become weird and unsettling. In a 2012 paper in the journal Cognition, Kurt Gray of the University of North Carolina and Dan Wegner of Harvard demonstrated the uncanny valley systematically. They showed people images of three robots—one that didn’t look human at all, one that looked like a cartoon and one that was more realistic. Most people preferred the cartoony robot and thought the realistic one was strange.
But where does the uncanny valley come from? And why didn’t it bother my grandchildren in “A Christmas Carol”? Some researchers have suggested that the phenomenon is rooted in an innate tendency to avoid humans who are abnormal in some way. But the uncanny valley might also reflect our ideas about minds and brains. A realistic robot looks as if it might have a mind, even though it isn’t a human mind, and that is unsettling. In the Gray study, the more people thought that the robot had thoughts and feelings, the creepier it seemed.
Kimberly Brink and Henry Wellman of the University of Michigan, along with Gray, designed a study to determine whether children experience the uncanny valley. In a 2017 paper in the journal Child Development, they showed 240 children, ages 3 to 18, the same three robots that Gray showed to adults. They asked the children how weird the robots were and whether they could think and feel for themselves. Surprisingly, until the children hit age 9, they didn’t see anything creepy about the realistic robots. Like my grandchildren, they were unperturbed by the almost human.
The development of the uncanny valley in children tracked their developing ideas about robots and minds. Younger children had no trouble with the idea that the realistic robot had a mind. In fact, in other studies, young children are quite sympathetic to robots—they'll try to comfort one that has fallen or been hurt. But the older children had a more complicated view. They began to feel that robots weren't the sort of things that should have minds, and this contributed to their sense that the realistic robot was creepy.
The uncanny valley turns out to be something we develop, not something we’re born with. But this raises a possibility that is itself uncanny. Right now, robots are very far from actually having minds; it is remarkably difficult to get them to do even simple things. But suppose a new generation of children grows up with robots that actually do have minds, or at least act as if they do. This study suggests that those children may never experience an uncanny valley at all. In fact, it is possible that the young children in the study are already being influenced by the increasingly sophisticated machines around them. My grandchildren regularly talk to Alexa and make a point of saying “please” if she doesn’t answer right away.
Long before there were robots, people feared the almost human, from the medieval golem to Frankenstein’s monster. Perhaps today’s children will lead the way in broadening our sympathies to all sentient beings, even artificial ones. I hope so, but I don’t think they’ll ever get me to like the strange creatures in that Christmas movie.
He was tall and rugged, with piercing blue eyes, blond hair and a magnificent jawline. And what was that slung across his chest? A holster for his Walther PPK? When I saw what the actor Daniel Craig—aka James Bond—was actually toting, my heart skipped a beat. It was an elegant, high-tech baby carrier, so that he could snuggle his baby daughter.
When a paparazzo recently snapped this photo of Mr. Craig, an online kerfuffle broke out after one obtuse commentator accused him of being “emasculated.” Now science has come to Mr. Craig’s defense. A new study of gorillas in Nature Scientific Reports, led by Stacy Rosenbaum and colleagues at Northwestern University and the Dian Fossey Fund, suggests that taking care of babies makes you sexy—at least, if you’re a male gorilla.
The study began with a counterintuitive observation: Even silverback gorillas, those stereotypically fearsome and powerful apes, turn out to love babies. Adult male gorillas consistently groom and even cuddle with little ones. And the gorillas don’t care only about their own offspring; they’re equally willing to hang out with other males’ babies.
The researchers analyzed the records of 23 male gorillas that were part of a group living in the mountains of Rwanda. From 2003 to 2004, observers recorded how much time each male spent in close contact with an infant. By 2014, about 100 babies had been born in the group, and the researchers used DNA, collected from the gorillas’ feces, to work out how many babies each male had fathered. Even when they controlled for other factors like age and status, there turned out to be a strong correlation between caring for children and sexual success. The males who were most attentive to infants sired five times more children than the least attentive. This suggests that females may have been preferentially selecting the males who cared for babies.
These results tell us something interesting about gorillas, but they may also help answer a crucial puzzle about human evolution. Human babies are much more helpless, for a much longer time, than those of other species. That long childhood is connected to our exceptionally large brains and capacity for learning. It also means that we have developed a much wider range of caregivers for those helpless babies. In particular, human fathers help take care of infants and they “pair bond” with mothers.
We take this for granted, but human fathers are actually much more monogamous, and invest more in their babies, than almost any other mammal, including our closest great ape relatives. (Only 5% of mammal species exhibit pair-bonding.) On the other hand, humans aren't as exclusively monogamous as some other animals—some birds, for example. And human fathers are flexible, voluntary caregivers: they don't always care for their babies, but when they do, they are just as effective and invested as mothers.
Our male primate ancestors must have evolved from the typical indifferent and promiscuous mammalian father into a committed human dad. The gorillas may suggest an evolutionary path that allowed this transformation to take place. And a crucial part of that path may be that men have a fondness for babies in general, whether or not they are biologically related.
Mr. Craig was on to something: You don’t really need dry martinis and Aston Martins to appeal to women. A nice Baby Bjorn will do.
If, like me, you’re on the wrong side of sixty, you’ve probably noticed those increasingly frequent and sinister “senior moments.” What was I looking for when I came into the kitchen? Did I already take out the trash? What’s old what’s-his-name’s name again?
One possible reaction to aging is resignation: You’re just past your expiration date. You may have heard that centuries ago the average life expectancy was only around 40 years. So you might think that modern medicine and nutrition are keeping us going past our evolutionary limit. No wonder the machine starts to break down.
In fact, recent research suggests a very different picture. The shorter average life expectancy of the past mainly reflects the fact that many more children died young. If you made it past childhood, however, you might well live into your 60s or beyond. In today’s hunter-gatherer cultures, whose way of life is closer to that of our prehistoric ancestors, it’s fairly common for people to live into their 70s. That is in striking contrast to our closest primate relatives, chimpanzees, who very rarely live past their 50s.
There seem to be uniquely human genetic adaptations that keep us going into old age and help to guard against cognitive decline. This suggests that the later decades of our lives are there for a reason. Human beings are uniquely cultural animals; we crucially depend on the discoveries of earlier generations. And older people are well suited to passing on their accumulated knowledge and wisdom to the next generation.
Michael Gurven, an anthropologist at the University of California, Santa Barbara, and his colleagues have been studying aging among the Tsimane, a group in the Bolivian Amazon. The Tsimane live in a way that is more like the way we all lived in the past, through hunting, gathering and small-scale farming of local foods, with relatively little schooling or contact with markets and cities. Many Tsimane are in their 60s or 70s, and some even make it to their 80s.
In a 2017 paper in the journal Developmental Psychology, Prof. Gurven and colleagues gave over 900 Tsimane people a battery of cognitive tasks. Older members of the group had a lot of trouble doing things like remembering a list of new words. But the researchers also asked their subjects to quickly name as many different kinds of fish or plants as they could. This ability improved as the Tsimane got older, peaking around age 40 and staying high even in old age.
Research on Western urban societies has produced similar findings. This suggests that our cognitive strengths and weaknesses change as we age, rather than just undergoing a general decline. Things like short-term memory and processing speed—what’s called “fluid intelligence”—peak in our 20s and decline precipitously in older age. But “crystallized intelligence”—how much we actually know, and how well we can access that knowledge—improves up to middle age, and then declines much more slowly, if at all.
So when I forget what happened yesterday but can tell my grandchildren and students vivid stories about what happened 40 years ago, I may not be falling apart after all. Instead, I may be doing just what evolution intended.
In 19th-century England, the Brontë children created Gondal, an imaginary kingdom full of melodrama and intrigue. Emily and Charlotte Brontë grew up to write the great novels “Wuthering Heights” and “Jane Eyre.” The fictional land of Narnia, chronicled by C.S. Lewis in a series of classic 20th-century novels, grew out of Boxen, an imaginary kingdom that Lewis shared with his brother when they were children. And when the novelist Anne Perry was growing up in New Zealand in the 1950s, she and another girl created an imaginary kingdom called Borovnia as part of an obsessive friendship that ended in murder—the film “Heavenly Creatures” tells the story.
But what about Abixia? Abixia is an island nation on the planet Rooark, with its own currency (the iinter, divided into 12 skilches), flag and national anthem. It’s inhabited by cat-humans who wear flannel shirts and revere Swiss army knives—the detailed description could go on for pages. And it was created by a pair of perfectly ordinary Oregon 10-year-olds.
Abixia is a “paracosm,” an extremely detailed and extensive imaginary world with its own geography and history. The psychologist Marjorie Taylor at the University of Oregon and her colleagues discovered Abixia, and many other worlds like it, by talking to children. Most of what we know about paracosms comes from writers who described the worlds they created when they were children. But in a paper forthcoming in the journal Child Development, Prof. Taylor shows that paracosms aren’t just the province of budding novelists. Instead, they are a surprisingly common part of childhood.
Prof. Taylor and her colleagues asked 169 children, ages 8 to 12, whether they had an imaginary world and what it was like. They found that about 17% of the children had created their own complicated universe. Often a group of children would jointly create a world and maintain it, sometimes for years, like the Brontë sisters or the Lewis brothers. And grown-ups were not invited in.
Prof. Taylor also tried to find out what made the paracosm creators special. They didn’t score any higher than other children in terms of IQ, vocabulary, creativity or memory. Interestingly, they scored worse on a test that measured their ability to inhibit irrelevant thoughts. Focusing on the stern and earnest real world may keep us from wandering off into possible ones.
But the paracosm creators were better at telling stories, and they were more likely to report that they also had an imaginary companion. In earlier research, Prof. Taylor found that around 66% of preschoolers have imaginary companions; many paracosms began with older children finding a home for their preschool imaginary friends.
Children with paracosms, like children with imaginary companions, weren’t neurotic loners either, as popular stereotypes might suggest. In fact, if anything, they were more socially skillful than other children.
Why do imaginary worlds start to show up when children are 8 to 12 years old? Even when 10-year-olds don't create paracosms, they seem to have a special affinity for them—think of all the young "Harry Potter" fanatics. And as Prof. Taylor points out, paracosms seem to be linked to all the private clubhouses, hidden rituals and secret societies of middle childhood.
Prof. Taylor showed that preschoolers who create imaginary friends are particularly good at understanding other people’s minds—they are expert at everyday psychology. For older children, the agenda seems to shift to what we might call everyday sociology or geography. Children may create alternative societies and countries in their play as a way of learning how to navigate real ones in adult life.
Of course, most of us leave those imaginary worlds behind when we grow up—the magic portals close. The mystery that remains is how great writers keep the doors open for us all.
In recent weeks, an orca in the Pacific Northwest carried her dead calf around with her for 17 days. It looked remarkably like grief. Indeed, there is evidence that many cetaceans—that is, whales, dolphins and porpoises—have strong and complicated family and social ties. Some species hunt cooperatively; others practice cooperative child care, taking care of one another’s babies.
The orcas, in particular, develop cultural traditions. Some groups hunt only seals, others eat only salmon. What’s more, they are one of very few species with menopausal grandmothers. Elderly orca females live well past their fertility and pass on valuable information and traditions to their children and grandchildren. Other cetaceans have cultural traditions too: Humpback whales learn their complex songs from other whales as they pass through the breeding grounds in the southern Pacific Ocean.
We also know that cetaceans have large and complex brains, even relative to their large bodies. Is there a connection? Is high intelligence the result of social and cultural complexity? This question is the focus of an important recent study by researchers Kieran Fox, Michael Muthukrishna and Suzanne Shultz, published last October in the journal Nature Ecology and Evolution.
Their findings may shed light on human beings as well. How and why did we get to be so smart? After all, in a relatively short time, humans developed much larger brains than their primate relatives, as well as powerful social and cultural skills. We cooperate with each other—at least most of the time—and our grandmothers, like grandmother orcas, pass on knowledge from one generation to the next. Did we become so smart because we are so social?
Humans evolved millions of years ago, so without a time machine, it’s hard to find out what actually happened. A clever alternate approach is to look at the cetaceans. These animals are very different from us, and their evolutionary history diverged from ours 95 million years ago. But if there is an intrinsic relationship between intelligence and social life, it should show up in whales and dolphins as well as in humans.
Dr. Fox and colleagues compiled an extensive database, recording as much information as they could find about 90 different species of cetaceans. They then looked at whether there was a relationship between the social lives of these animals and their brain size. They discovered that species living in midsized social groups, with between two and 50 members, had the largest brains, followed by animals who lived in very large pods with hundreds of animals. Solitary animals had the smallest brains. The study also found a strong correlation between brain size and social repertoire: Species who cooperated, cared for each other’s young and passed on cultural traditions had larger brains than those who did not.
Which came first, social complexity or larger brains? Dr. Fox and colleagues conducted sophisticated statistical analyses that suggested there was a feedback loop between intelligence and social behavior. Living in a group allowed for more complex social lives that rewarded bigger brains. Animals who excelled at social interaction could obtain more resources, which allowed them to develop yet bigger brains. This kind of feedback loop might also account for the explosively fast evolution of human beings.
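The feedback loop can be made concrete with a toy model: social complexity rewards bigger brains, and bigger brains support richer social life. The coupling constant and starting values are invented purely for illustration—this is a cartoon of positive feedback, not the statistical analysis in the study.

```python
# Toy model of a brain-size / social-complexity feedback loop.
# All numbers are invented; the point is only the accelerating growth.

def run_feedback(generations, gain=0.1):
    """Iterate two mutually reinforcing quantities and record their growth."""
    brain, social = 1.0, 1.0
    history = []
    for _ in range(generations):
        brain += gain * social    # richer social life rewards bigger brains
        social += gain * brain    # bigger brains enable richer social life
        history.append((brain, social))
    return history

trajectory = run_feedback(20)
# Because each quantity feeds the other, the increments keep getting larger:
# growth accelerates instead of leveling off.
```

That accelerating, compounding pattern is what makes a feedback loop a candidate explanation for explosively fast brain evolution, in cetaceans or in us.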
Of course, intelligence is a relative term. The orcas’ cognitive sophistication and social abilities haven’t preserved them from the ravages of environmental change. The orca grieving her dead baby was, sadly, all too typical of her endangered population. It remains to be seen whether our human brains can do any better.
What is it like to be a baby? Very young children can’t tell us what their experiences are like, and none of us can remember the beginnings of our lives. So it would seem that we have no way of understanding baby consciousness, or even of knowing if babies are conscious at all.
But some fascinating new neuroscience research is changing that. It turns out that when adults dream or have psychedelic experiences, their brains are functioning more like children’s brains. It appears that the experience of babies and young children is more like dreaming or tripping than like our usual grown-up consciousness.
As we get older, the brain’s synapses—the connections between neurons—start to change. The young brain is very “plastic,” as neuroscientists say: Between birth and about age 5, the brain easily makes new connections. A preschooler’s brain has many more synapses than an adult brain. Then comes a kind of tipping point. Some connections, especially the ones that are used a lot, become longer, stronger and more efficient. But many other connections disappear—they are “pruned.”
What’s more, different areas of the brain are active in children and adults. Parts of the back of the brain are responsible for things like visual processing and perception. These areas mature quite early and are active even in infancy. By contrast, areas at the very front of the brain, in the prefrontal cortex, aren’t completely mature until after adolescence. The prefrontal cortex is the executive office of the brain, responsible for focus, control and long-term planning.
Like most adults, I spend most of my waking hours thinking about getting things done. Scientists have discovered that when we experience the world in this way, the brain sends out signals along the established, stable, efficient networks that we develop as adults. The prefrontal areas are especially active and have a strong influence on the rest of the brain. In short, when we are thinking like grown-ups, our brains look very grown-up too.
But recently, neuroscientists have started to explore other states of consciousness. In research published in Nature in 2017, Giulio Tononi of the University of Wisconsin and colleagues looked at what happens when we dream. They measured brain activity as people slept, waking them up at regular intervals to ask whether they had been dreaming. Then the scientists looked at what the brain had been doing just before the sleepers woke up. When people reported dreaming, parts of the back of the brain were much more active—like the areas that are active in babies. The prefrontal area, on the other hand, shuts down during sleep.
A number of recent studies also explore the brain activity that accompanies psychedelic experiences. A study published last month in the journal Cell by David Olson of the University of California, Davis, and colleagues looked at how mind-altering chemicals affect synapses in rats. They found that a wide range of psychedelic chemicals made the brain more plastic, leading brain cells to grow more connections. It’s as if the cells went back to their malleable, infantile state.
In other words, the brains of dreamers and trippers looked more like those of young children than those of focused, hard-working adults. In a way, this makes sense. When you have a dream or a psychedelic experience, it’s hard to focus your attention or control your thoughts—which is why reporting these experiences is notoriously difficult. At the same time, when you have a vivid nightmare or a mind-expanding experience, you certainly feel more conscious than you are in boring, everyday life.
In the same way, an infant’s consciousness may be less focused and controlled than an adult’s but more vivid and immediate, combining perception, memory and imagination. Being a baby may be both stranger and more intense than we think.
Why am I afraid to die? Maybe it’s the “I” in that sentence. It seems that I have a single constant self—the same “I” who peered out from my crib is now startled to see my aging face in the mirror 60 years later. It’s my inner observer, chief executive officer and autobiographer. It’s terrifying to think that this “I” will just disappear.
But what if this “I” doesn’t actually exist? For more than 2,000 years, Buddhist philosophers have argued that the self is an illusion, and many contemporary philosophers and psychologists agree. Buddhists say this realization should make us fear death less. The person I am now will be replaced by the person I am in five years, anyway, so why worry if she vanishes for good?
A recent paper in the journal Cognitive Science has an unusual combination of authors. A philosopher, a scholar of Buddhism, a social psychologist and a practicing Tibetan Buddhist tried to find out whether believing in Buddhism really does change how you feel about your self—and about death.
The philosopher Shaun Nichols of the University of Arizona and his fellow authors studied Christian and nonreligious Americans, Hindus and both everyday Tibetan Buddhists and Tibetan Buddhist monks. Among other questions, the researchers asked participants about their sense of self—for example, how strongly they believed they would be the same five years from now. Religious and nonreligious Americans had the strongest sense of self, and the Buddhists, especially the monks, had the least.
In previous work, Prof. Nichols and other colleagues showed that changing your sense of self really could make you act differently. A weaker sense of self made you more likely to be generous to others. The researchers in the new study predicted that the Buddhists would be less frightened of death.
The results were very surprising. Most participants reported about the same degree of fear, whether or not they believed in an afterlife. But the monks said that they were much more afraid of death than any other group.
Why would this be? The Buddhist scholars themselves say that merely knowing there is no self isn’t enough to get rid of the feeling that the self is there. Neuroscience supports this idea. Our sense of self, and the capacities like autobiographical memory and long-term planning that go with it, activates something called the default mode network—a set of connected brain areas. Long-term meditators have a less-active default mode network, but it takes them years to break down the idea of the self, and the monks in this study weren’t expert meditators.
Another factor in explaining why these monks were more afraid of death might be that they were trained to think constantly about mortality. The Buddha, perhaps apocryphally, once said that his followers should think about death with every breath. Maybe just ignoring death is a better strategy.
There may be one more explanation for the results. Our children and loved ones are an extension of who we are. Their survival after we die is a profound consolation, even for atheists. Monks give up those intimate attachments.
I once advised a young man at Google headquarters who worried about mortality. He agreed that a wife and children might help, but even finding a girlfriend was a lot of work. He wanted a more efficient tech solution—like not dying. But maybe the best way of conquering both death and the self is to love somebody else.
Suddenly, computers can do things that seemed impossible not so many years ago, from mastering the game of Go and acing Atari games to translating text and recognizing images. The secret is that these programs learn from experience. The great artificial-intelligence boom depends on learning, and children are the best learners in the universe. So computer scientists are starting to look to children for inspiration.
Everybody knows that young children are insatiably curious, but I and other researchers in the field of cognitive development, such as Laura Schulz at the Massachusetts Institute of Technology, are beginning to show just how that curiosity works. Taking off from these studies, the computer scientists Deepak Pathak and Pulkit Agrawal have worked with others at my school, the University of California, Berkeley, to demonstrate that curiosity can help computers to learn, too.
One of the most common ways that machines learn is through reinforcement. The computer keeps track of when a particular action leads to a reward—like a higher score in a videogame or a winning position in Go. The machine tries to repeat rewarding sequences of actions and to avoid less-rewarding ones.
This technique still has trouble, however, with even simple videogames such as Super Mario Bros.—a game that children can master easily. One problem is that before you can score, you need to figure out the basics of how Super Mario works—the players jump over columns and hop over walls. Simply trying to maximize your score won’t help you learn these general principles. Instead, you have to go out and explore the Super Mario universe.
Another problem with reinforcement learning is that programs can get stuck trying the same successful strategy over and over, instead of risking something new. Most of the time, a new strategy won’t work better, but occasionally it will turn out to be much more effective than the tried-and-true one. You also need to explore to find that out.
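The trade-off in the two paragraphs above can be sketched as a toy two-armed bandit (the action names and payoff numbers here are invented for illustration, not taken from any real system): a learner that only repeats its best-known action can lock onto a mediocre strategy, while occasional random exploration discovers the better one.

```python
import random

# Hypothetical payoffs: "safe" reliably pays a little, "risky" pays more,
# but the learner doesn't know any of this in advance.
TRUE_MEAN = {"safe": 1.0, "risky": 2.0}

def pull(action, rng):
    # Noisy reward around the action's true average payoff.
    return TRUE_MEAN[action] + rng.uniform(-0.4, 0.4)

def run(epsilon, steps=5000, seed=0):
    """Epsilon-greedy reinforcement: exploit the best-known action,
    but try a random action with probability epsilon."""
    rng = random.Random(seed)
    totals = {"safe": 0.0, "risky": 0.0}
    counts = {"safe": 0, "risky": 0}
    for _ in range(steps):
        # The action with the highest observed average reward so far.
        best = max(totals, key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
        action = rng.choice(["safe", "risky"]) if rng.random() < epsilon else best
        totals[action] += pull(action, rng)
        counts[action] += 1
    return counts

# With epsilon=0 the learner tries "safe" first and, since it always pays,
# never samples "risky" at all. With epsilon=0.1 it eventually stumbles
# onto "risky", learns that it pays better, and switches.
```

In this toy setup, pure exploitation never even discovers that the better action exists; that is the sense in which "you also need to explore to find that out."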
The same holds for real life, of course. When I get a new smartphone, I use something like reinforcement learning: I try to get it to do specific things that I’ve done many times before, like call someone up. (How old school is that!) If the call gets made, I stop there. When I give the phone to my 4-year-old granddaughter, she wildly swipes and pokes until she has discovered functions that I didn’t even suspect were there. But how can you build that kind of curiosity into a computer?
Drs. Pathak and Agrawal have designed a program to use curiosity in mastering videogames. It has two crucial features to do just that. First, instead of just getting rewards for a higher score, it’s also rewarded for being wrong. The program tries to predict what the screen will look like shortly after it makes a new move. If the prediction is right, the program won’t make that move again—it’s the same old same old. But if the prediction is wrong, the program will make the move again, trying to get more information. The machine is always driven to try new things and explore possibilities.
Another feature of the new program is focus. It doesn’t pay attention to every unexpected pixel anywhere on the screen. Instead, it concentrates on the parts of the screen that can influence its actions, like the columns or walls in Mario’s path. Again, this is a lot like a child trying out every new action she can think of with a toy and taking note of what happens, even as she ignores mysterious things happening in the grown-up world. The new program does much better than the standard reinforcement-learning algorithms.
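The curiosity reward described in the two paragraphs above can be sketched in a few lines (a deliberately tiny version: the toy world, the action names and the 0-or-1 reward are my inventions, and the real program predicts game screens with neural networks rather than looking up exact states): the agent keeps a forward model of what its moves do, and it is rewarded only when that model's prediction turns out wrong.

```python
# Toy curiosity-driven reward: surprise (prediction error) is the payoff.
# The hypothetical world is a chain of numbered states; "go" advances,
# "stay" does nothing.

def step(state, action):
    return state + 1 if action == "go" else state

class CuriousAgent:
    def __init__(self):
        # Forward model: (state, action) -> predicted next state.
        self.forward_model = {}

    def intrinsic_reward(self, state, action, next_state):
        predicted = self.forward_model.get((state, action))
        # Reward 1.0 when the outcome was unpredicted (surprising),
        # 0.0 when the model already knew what would happen.
        reward = 0.0 if predicted == next_state else 1.0
        self.forward_model[(state, action)] = next_state  # learn from the move
        return reward

agent = CuriousAgent()
state = 0
go_rewards = []
for _ in range(5):
    nxt = step(state, "go")
    go_rewards.append(agent.intrinsic_reward(state, "go", nxt))
    state = nxt
# Every "go" reaches a never-seen state, so it stays rewarding: all 1.0.

stay_rewards = [agent.intrinsic_reward(9, "stay", step(9, "stay")) for _ in range(3)]
# "stay" surprises once, then is perfectly predicted: 1.0, then 0.0, 0.0.
```

The second feature described above, focus, would correspond to predicting only the action-relevant parts of the screen; this sketch omits it.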
Super Mario is still a very limited world compared with the rich, unexpected, unpredictable real world that every 4-year-old has to master. But if artificial intelligence is really going to compete with natural intelligence, more childlike insatiable curiosity may help.
“Grandmom, I love Mommy most, of course, but you do tell the best stories—especially Odysseus and the Cyclops.” This authentic, if somewhat mixed, review from my grandson may capture a profound fact about human nature. A new study by Michael Gurven and colleagues suggests that grandparents really may be designed to pass on the great stories to their grandchildren.
One of the great puzzles of human evolution is why we have such a distinctive “life history.” We have much longer childhoods than any other primate, and we also live much longer, well past the age when we can fully pull our weight. While people in earlier generations had a shorter life expectancy overall, partly because many died in childhood, some humans have always lived into their 60s and 70s. Researchers find it especially puzzling that female humans have always lived well past menopause. Our closest primate relatives die in their 50s.
Perhaps, some anthropologists speculate, grandparents evolved to provide another source of food and care for all those helpless children. I’ve written in these pages about what the anthropologist Kristen Hawkes of the University of Utah has called “the grandmother hypothesis.” Prof. Hawkes found that in forager cultures, also known as hunter-gatherer societies, the food that grandmothers produce makes all the difference to their grandchildren’s survival.
In contrast, Dr. Gurven and his colleagues focus more on how human beings pass on information from one generation to another. Before there was writing, human storytelling was one of the most important kinds of cultural transmission. Could grandparents have adapted to help that process along?
Dr. Gurven’s team, writing earlier this year in the journal Evolution and Human Behavior, studied the Tsimane, a community in the Amazon River basin who live as our ancestors once did. The Tsimane, more than 10,000 strong, gather and garden, hunt and fish, without much involvement in the market economy. And they have a rich tradition of stories and songs. They have myths about Dojity and Micha, creators of the Earth, with the timeless themes of murder, adultery and revenge. They also sing melancholy songs about rejected love (the blues may be a universal part of human nature).
During studies of the Tsimane spread over a number of years, Dr. Gurven and his colleagues conducted interviews to find out who told the most stories and sang the most songs, who was considered the best in each category and who the audience was for these performances. The grandparents, people from age 60 to 80, most frequently came out on top. While only 5% of Tsimane aged 15 to 29 told stories, 44% of those aged 60 to 80 did. And the elders’ most devoted audiences were their much younger kin. When the researchers asked where the Tsimane had heard stories, 84% of the stories came from older relatives other than parents, particularly grandparents.
This preference for grandparents may be tied to the anthropological concept of “alternate generations.” Parents may be more likely to pass on the practical skills of using a machete or avoiding a jaguar, while their own parents pass on the big picture of how a community understands the world and itself. Other studies have found that relations between grandparents and grandchildren tend to be more egalitarian than the “I told you not to do that” relationship between so many parents and children.
Grandparents may play a less significant cultural role in a complex, mobile modern society. Modern pop stars and TV showrunners are more likely to be millennial than menopausal. But when they get the chance, grandmas and grandpas still do what they’ve done across the ages—turning the attention of children to the very important business of telling stories and singing songs.
When adults look out at other people, we have what psychologists and philosophers call a “theory of mind”—that is, we think that the people around us have feelings, emotions and beliefs just as we do. And we somehow manage to read complex mental states in their sounds and movements.
But what do babies see when they look out at other people? They know so much less than we do. It’s not hard to imagine that, as we coo and mug for them, they only see strange bags of skin stuffed into clothes, with two restless dots at the top and a hole underneath that opens and closes.
Our sophisticated grown-up understanding of other people develops through a long process of learning and experience. But babies may have more of a head start than we imagine. A new study by Andrew Meltzoff and his colleagues at the University of Washington, published in January in the journal Developmental Science, finds that our connection to others starts very early.
Dr. Meltzoff has spent many years studying the way that babies imitate the expressions and actions of other people. Imitation suggests that babies do indeed connect their own internal feelings to the behavior of others. In the new study, the experimenters looked at how this ability is reflected in babies’ brains.
Studies with adults have shown that some brain areas activate in the same way when I do something as when I see someone else do the same thing. But, of course, adults have spent many years watching other people and experiencing their own feelings. What about babies?
The trouble is that studying babies’ brains is really hard. The typical adult studies use an fMRI (functional magnetic resonance imaging) machine: Participants have to lie perfectly still in a very noisy metal tube. Some studies have used electroencephalography to measure babies’ brain waves, but EEG only tells you when a baby’s brain is active. It doesn’t say where that brain activity is taking place.
Dr. Meltzoff and colleagues have been pioneers in using a new technique called magnetoencephalography with very young babies. Babies sit in a contraption that’s like a cross between a car seat and an old-fashioned helmet hairdryer. The MEG machine measures very faint and subtle magnetic signals that come from the brain, using algorithms to correct for a wriggling baby’s movements.
In this study, the experimenters used MEG with 71 very young babies—only seven months old. They recorded signals from a part of the brain called the “somatosensory” cortex. In adults, this brain area is a kind of map of your body. Sensations in different body parts activate different “somatosensory” areas, which correspond to the arrangement of the body: Hand areas are near arm areas, and leg areas are near foot areas.
One group of babies felt a small puff of air on their hand or their foot. The brain activation pattern for hands and feet turned out to be different, just as it is for grown-ups. Then the experimenters showed other babies a video of an adult hand or foot that was touched by a rod. Surprisingly, seeing a touch on someone else’s hand activated the same brain area as feeling a touch on their own hand, though more faintly. The same was true for feet.
These tiny babies already seemed to connect their own bodily feelings to the feelings of others. These may be the first steps toward a full-fledged theory of mind. Even babies, it turns out, don’t just see bags of skin. We seem to be born with the ability to link our own minds and the minds of others.
Figuring out why teenagers act the way they do is a challenge for everybody, scientists as well as parents. For a long time, society and science focused on adolescents’ problems, not on their strengths. There are good reasons for this: Teens are at high risk for accidents, suicide, drug use and overall trouble. The general perception was that the teenage brain is defective in some way, limited by a relatively undeveloped prefrontal cortex or overwhelmed by hormones.
In the past few years, however, some scientists have begun to think differently. They see adolescence as a time of both risk and unusual capacities. It turns out that teens are better than both younger children and adults at solving some kinds of problems.
Teenagers seem to be especially adept at processing social information, particularly social information about their peers. From an evolutionary perspective, this makes a lot of sense. Adolescence is, after all, when you leave the shelter of your parents for the big world outside. A 2017 study by Maya Rosen and Katie McLaughlin of the University of Washington and their colleagues, published in the journal Developmental Science, is an important contribution to this new way of thinking about teens.
Young children don’t have to be terribly sensitive to the way that their parents feel, or even to how other children feel, in order to survive. But for teenagers, figuring out other people’s emotions becomes a crucial and challenging task. Did he just smile because he likes me or because he’s making fun of me? Did she just look away because she’s shy or because she’s angry?
The researchers studied 54 children and teenagers, ranging from 8 to 19 years old. They showed participants a series of faces expressing different emotions. After one face appeared, a new face would show up with either the same expression or a different one. The participants had to say, over the course of 100 trials, whether the new face’s expression matched the old one or was different.
While this was going on, the researchers also used an fMRI scanner to measure the participants’ brain activation. They focused on “the salience network”—brain areas that light up when something is important or relevant, particularly in the social realm.
Early adolescents, aged 14 or so, showed more brain activation when they saw an emotion mismatch than did either younger children or young adults.
Is this sensitivity a good thing or bad? The researchers also gave the participants a questionnaire to measure social anxiety and social problems. They asked the children and adolescents how well different sentences described them. They would say, for example, whether “I’m lonely” or “I don’t like to be with people I don’t know well” was a good description of how they felt. In other studies, these self-rating tests have turned out to be a robust measure of social anxiety and adjustment in the real world.
The researchers found that the participants whose brains reacted more strongly to the emotional expressions also reported fewer social problems and anxiety—they were less likely to say that they were lonely or avoided strangers. Having a brain that pays special attention to other people’s emotions allows you to understand and deal with those emotions better, and so to improve your social life.
The brains of young teenagers were especially likely to react to emotion in this way. And, of course, this period of transition to adult life is especially challenging socially. The young adolescents’ increased sensitivity appears to be an advantage in figuring out their place in the world. Rather than being defective, their brains functioned in a way that helped them deal with the special challenges of teenage life.
We humans have an exceptionally long childhood, generally bear just one child at a time and work hard to take care of our children. Is this related to our equally distinctive large brains and high intelligence? Biologists say that, by and large, the smarter species of primates and even birds mature later, have fewer babies, and invest more in those babies than do the dimmer species.
“Intelligence” is defined, of course, from a human perspective. Plenty of animals thrive and adapt without a large brain or developed learning abilities.
But how far in the animal kingdom does this relationship between learning and life history extend? Butterflies are about as different from humans as could be—laying hundreds of eggs, living for just a few weeks and possessing brains no bigger than the tip of a pen. Even by insect standards, they’re not very bright. A bug-loving biology teacher I know perpetually complains that foolish humans prefer pretty but vapid butterflies to her brilliant pet cockroaches.
But entomologist Emilie Snell-Rood at the University of Minnesota and colleagues have found a similar relationship between learning and life history in butterflies. The insects that are smarter have a longer period of immaturity and fewer babies. The research suggests that these humble creatures, which have existed for roughly 50 million years, can teach us something about how to adapt to a quickly changing world.
Climate change or habitat loss drives some animals to extinction. But others alter the development of their bodies or behavior to suit a changing new environment, demonstrating what scientists call “developmental plasticity.” Dr. Snell-Rood wants to understand these fast adaptations, especially those caused by human influence. She has shown, for example, how road-salting has altered the development of curbside butterflies.
Learning is a particularly powerful kind of plasticity. Cabbage white butterflies, the nemesis of the veggie gardener, flit from kale to cabbage to chard, in search of the best host plant for the larvae that will hatch from their eggs and start munching. In a 2009 paper in the American Naturalist, Dr. Snell-Rood found that all the bugs start out with a strong innate bias toward green plants such as kale. But some adventurous and intelligent butterflies may accidentally land on a nutritious red cabbage and learn that the red leaves are good hosts, too. The next day those smart insects will be more likely to seek out red, not just green, plants.
In a 2011 paper in the journal Behavioral Ecology, Dr. Snell-Rood showed that the butterflies who were better learners also took longer to reach reproductive maturity, and they produced fewer eggs overall. When she gave the insects a hormone that sped up their development, so that they grew up more quickly, they were worse at learning.
In a paper in the journal Animal Behaviour published this year, Dr. Snell-Rood looked at another kind of butterfly intelligence. The experimenters presented cabbage whites with a choice between leaves that had been grown with more or less fertilizer, and leaves that either did or did not have a dead, carefully posed cabbage white pinned to them.
Some of the insects laid eggs all over the place. But some preferred the leaves that were especially nutritious. What’s more, these same butterflies avoided leaves that were occupied by other butterflies, where the eggs would face more competition. The choosier butterflies, like the good learners, produced fewer eggs overall. There was a trade-off between simply producing more young and taking the time and care to make sure those young survived.
In genetic selection, an organism produces many kinds of offspring, and only the well-adapted survive. But once you have a brain that can learn, even a butterfly brain, you can adapt to a changing environment in a single generation. That will ensure more reproductive success in the long run.
Inequality starts early. In 2015, 23% of American children under 3 grew up in poverty, according to the Census Bureau. By the time children reach first grade, there are already big gaps, based on parents’ income, in academic skills like reading and writing. The comparisons look even starker when you contrast middle-class U.S. children and children in developing countries like Peru.
Can schooling reverse these gaps, or are they doomed to grow as the children get older? Scientists like me usually study preschoolers in venues like university preschools and science museums. The children are mostly privileged, with parents who have given them every advantage and are increasingly set on giving instruction to even the youngest children. So how can we reliably test whether certain skills are the birthright of all children, rich or poor?
My psychology lab at the University of California, Berkeley, has been trying to provide at least partial answers, and my colleagues and I published some of the results in the Aug. 23 edition of the journal Child Development. Our earlier research has found that young children are remarkably good at learning. For example, they can figure out cause-and-effect relationships, one of the foundations of scientific thinking.
How can we ask 4-year-olds about cause and effect? We use what we call “the blicket detector”—a machine that lights up when you put some combinations of different-shaped blocks on it but not others. The subjects themselves don’t handle the blocks or the machine; an experimenter demonstrates it for them, using combinations of one and two blocks.
In the training phase of the experiment, some of the young children saw a machine that worked in a straightforward way—some individual blocks made it go and others didn’t. The rest of the children observed a machine that worked in a more unusual way—only a combination of two specific blocks made it go. We also used the demonstration to train two groups of adults.
Could the participants, children and adults alike, use the training data to figure out how a new set of blocks worked? The very young children did. If the training blocks worked the unusual way, they thought that the new blocks would also work that way, and they used that assumption to determine which specific blocks caused the machine to light up. But most of the adults didn’t get it—they stuck with the obvious idea that only one block was needed to make the machine run.
In the Child Development study of 290 children, we set out to see what less-privileged children would do. We tested 4-year-old Americans in preschools for low-income children run by the federal Head Start program, which also focuses on health, nutrition and parent involvement. These children did worse than middle-class children on vocabulary tests and “executive function”—the ability to plan and focus. But the poorer children were just as good as their wealthier counterparts at finding the creative answer to the cause-and-effect problems.
Then, in Peru, we studied 4-year-olds in schools serving families who mostly have come from the countryside and settled in the outskirts of Lima, and who have average earnings of less than $12,000 a year. These children also did surprisingly well. They solved even the most difficult tasks as well as the middle-class U.S. children (and did better than adults in Peru or the U.S.).
Though the children we tested weren’t from wealthy families, their parents did care enough to get them into preschool. We didn’t look at how children with less social support would do. But the results suggest that you don’t need middle-class enrichment to be smart. All children may be born with the ability to think like creative scientists. We need to make sure that those abilities are nurtured, not neglected.
How does a new song go viral, replacing the outmoded hits of a few years ago? How are favorite dishes passed on through the generations, from grandmother to grandchild? Two new papers in the Proceedings of the National Academy of Sciences examine the remarkable and distinctive ability to transmit culture. The studies describe some of the most culturally sophisticated beings on Earth.
Or, to be more precise, at sea. Whales and other cetaceans, such as dolphins and porpoises, turn out to have more complex cultural abilities than any other animal except us.
For a long time, people thought that culture was uniquely human. But new studies show that a wide range of animals, from birds to bees to chimpanzees, can pass on information and behaviors to others. Whales have especially impressive kinds of culture, which we are only just beginning to understand, thanks to the phenomenal efforts of cetacean specialists. (As a whale researcher once said to me with a sigh, “Just imagine if each of your research participants was the size of a 30-ton truck.”)
One of the new studies, by Ellen Garland of the University of St. Andrews in Scotland and her colleagues, looked at humpback whale songs. Only males sing them, especially in the breeding grounds, which suggests that music is the food of love for cetaceans, too—though the exact function of the songs is still obscure.
The songs, which can last for as long as a half-hour, have a complicated structure, much like human language or music. They are made up of larger themes constructed from shorter phrases, and they have the whale equivalent of rhythm and rhyme. Perhaps that’s why we humans find them so compelling and beautiful.
The songs also change as they are passed on, like human songs. All the male whales in a group sing the same song, but every few years the songs are completely transformed. Researchers have trailed the whales across the Pacific, recording their songs as they go. The whales learn the new songs from other groups of whales when they mingle in the feeding grounds. But how?
The current paper looked at an unusual set of whales that produced rare hybrid songs—a sort of mashup of songs from different groups. Hybrids showed up as the whales transitioned from one song to the next. The hybrids suggested that the whales weren’t just memorizing the songs as a single unit. They were taking the songs apart and putting them back together, creating variations using the song structure.
The other paper, by Hal Whitehead of Dalhousie University in Halifax, Nova Scotia, looked at a different kind of cultural transmission in another species, the killer whale. The humpback songs spread horizontally, passing from one virile young thing to the next, like teenage fashions. But the real power of culture comes when caregivers can pass on discoveries to the next generation. That sort of vertical transmission is what gives human beings their edge.
Killer whales stay with their mothers for as long as the mothers live, and mothers pass on eating traditions. In the same patch of ocean, you will find some whales that only eat salmon and other whales that only eat mammals, and these preferences are passed on from mother to child.
Even grandmothers may play a role. Besides humans, killer whales are the only mammal whose females live well past menopause. Those old females help to ensure the survival of their offspring, and they might help to pass on a preference for herring or shark to their grandchildren, too. (That may be more useful than my grandchildren’s legacy—a taste for Montreal smoked meat and bad Borscht Belt jokes.)
Dr. Whitehead argues that these cultural traditions may even lead to physical changes. As different groups of whales become isolated from each other, the salmon eaters in one group and the mammal eaters in another, there appears to be a genetic shift affecting things such as their digestive abilities. The pattern should sound familiar: It’s how the cultural innovation of dairy farming led to the selection of genes for lactose-tolerance in humans. Even in whales, culture and nature are inextricably entwined.
In September 1678, a brilliant young Irish scientist named William Molyneux married the beautiful Lucy Domville. By November she had fallen ill and become blind, and the doctors could do nothing for her. Molyneux reacted by devoting himself to the study of vision.
He also studied vision because he wanted to resolve some big philosophical issues: What kinds of knowledge are we born with? What is learned? And does that learning have to happen at certain stages in our lives? In 1688 he asked the philosopher John Locke: Suppose someone who was born blind suddenly regained their sight. What would they understand about the visual world?
In the 17th century, Molyneux’s question was science fiction. Locke and his peers enthusiastically debated and speculated about the answer, but there was no way to actually restore a blind baby’s sight. That’s no longer true today. Some kinds of congenital blindness, such as congenital cataracts, can be cured.
More than 300 years after Molyneux, another brilliant young scientist, Pawan Sinha of the Massachusetts Institute of Technology, has begun to find answers to his predecessor’s questions. Dr. Sinha has produced a substantial body of research, culminating in a paper last month in the Proceedings of the National Academy of Sciences.
Like Molyneux, he was moved by both philosophical questions and human tragedy. When he was growing up, Dr. Sinha saw blind children begging on the streets of New Delhi. So in 2005 he helped to start Project Prakash, from the Sanskrit word for light. Prakash gives medical attention to blind children and teenagers in rural India. To date, the project has helped to treat more than 1,400 children, restoring sight to many.
Project Prakash has also given scientists a chance to answer Molyneux’s questions: to discover what we know about the visual world when we’re born, what we learn and when we have to learn it.
Dr. Sinha and his colleagues discovered that some abilities that might seem to be learned show up as soon as children can see. For example, consider the classic Ponzo visual illusion. When you see two equal horizontal lines drawn on top of a perspective drawing of receding railway ties, the top line will look much longer than the bottom one. You might have thought that the illusion depends on learning about distance and perspective, but the newly sighted children saw the illusion immediately, just as sighted people do.
On the other hand, some basic visual abilities depend more on experience at a critical time. When congenital cataracts are treated very early, children tend to develop fairly good visual acuity—the ability to see fine detail. Children who are treated much later don’t tend to develop the same level of acuity, even after they have had a lot of visual experience.
In the most recent study, Dr. Sinha and colleagues looked at our ability to tell the difference between faces and other objects. People are very sensitive to faces; special brain areas are dedicated to face perception, and babies can discriminate pictures of faces from other pictures when they are only a few weeks old.
The researchers studied five Indian children who were part of the Prakash project, aged 9 to 17, born blind but given sight. At first they couldn’t distinguish faces from similar pictures. But over the next few months they learned the skill and eventually they did as well as sighted children. So face detection had a different profile from both visual illusions and visual acuity—it wasn’t there right away, but it could be learned relatively quickly.
The moral of the story is that the right answer about nature versus nurture is…it’s complicated. And that sometimes, at least, searching for the truth can go hand-in-hand with making the world a better place.
There is no more chilling wartime phrase than “I was just following orders.” Surely, most of us think, someone who obeys a command to commit a crime is still acting purposely, and following orders isn’t a sufficient excuse. New studies help to explain how seemingly good people come to do terrible things in these circumstances: When obeying someone else, they do indeed often feel that they aren’t acting intentionally.
Patrick Haggard, a neuroscientist at University College London, has been engaged for years in studying our feelings of agency and intention. But how can you measure them objectively? Asking people to report such an elusive sensation is problematic. Dr. Haggard found another way. In 2002 he discovered that intentional action has a distinctive but subtle signature: It warps your sense of time.
People can usually perceive the interval between two events quite precisely, down to milliseconds. But when you act intentionally to make something happen—say, you press a button to make a sound play—your sense of time is distorted. You think that the sound follows your action more quickly than it actually does—a phenomenon called “intentional binding.” Your sense of agency somehow pulls the action and the effect together.
This doesn’t happen if someone else presses your finger to the button or if electrical stimulation makes your finger press down involuntarily. And this distinctive time signature comes with a distinctive neural signature too.
More recent studies show that following instructions can at times look more like passive, involuntary movement than like willed intentional action. In the journal Psychological Science last month, Peter Lush of the University of Sussex, together with colleagues including Dr. Haggard, examined hypnosis. Hypnosis is puzzling because people produce complicated and surely intentional actions—for example, imitating a chicken—but insist that they were involuntary.
The researchers hypnotized people and then suggested that they press a button making a sound. The hypnotized people didn’t show the characteristic time-distortion signature of agency. They reported the time interval between the action and the sound accurately, as if someone else had pressed their finger down. Hypnosis really did make the actions look less intentional.
In another study, Dr. Haggard and colleagues took off from the famous Milgram experiments of the 1960s. Social psychologist Stanley Milgram discovered that ordinary people were willing to administer painful shocks to someone else simply because the experimenter told them to. In Dr. Haggard’s version, reported in the journal Current Biology last year, volunteers did the experiment in pairs. If they pressed a button, a sound would play, the other person would get a brief but painful shock and they themselves would get about $20; each “victim” later got a chance to shock the aggressor.
Sometimes the participants were free to choose whether or not to press the button, and they shocked the other person about half the time. At other times the experimenter told the participants what to do.
In the free-choice trials, the participants showed the usual “intentional binding” time distortion: They experienced the task as free agents. Their brain activity, recorded by an electroencephalogram, looked intentional too.
But when the experimenter told participants to shock the other person, they did not show the signature of intention, either in their time perception or in their brain responses. They looked like people who had been hypnotized or whose finger was moved for them, not like people who had set out to move their finger themselves. Following orders was apparently enough to remove the feeling of free will.
These studies leave some big questions. When people follow orders, do they really lose their agency or does it just feel that way? Is there a difference? Most of all, what can we do to ensure that this very human phenomenon doesn’t lead to more horrific inhumanity in the future?
A few years ago, in my book “The Philosophical Baby,” I speculated that children might actually be more conscious, or at least more aware of their surroundings, than adults. Lots of research shows that we adults have a narrow “spotlight” of attention. We vividly experience the things that we focus on but are remarkably oblivious to everything else. There’s even a term for it: “inattentional blindness.” I thought that children’s consciousness might be more like a “lantern,” illuminating everything around it.
When the book came out, I got many fascinating letters about how children see more than adults. A store detective described how he would perch on an upper balcony surveying the shop floor. The grown-ups, including the shoplifters, were so focused on what they were doing that they never noticed him. But the little children, trailing behind their oblivious parents, would glance up and wave.
Of course, anecdotes and impressions aren’t scientific proof. But a new paper in press in the journal Psychological Science suggests that the store detective and I just might have been right.
One of the most dramatic examples of the adult spotlight is “change blindness.” You can show people a picture, interrupt it with a blank screen, and then show the same picture with a change in the background. Even when you’re looking hard for the change, it’s remarkably difficult to see, although once someone points it out, it seems obvious. You can see the same thing outside the lab. Movie directors have to worry about “continuity” problems in their films because it’s so hard for them to notice when something in the background has changed between takes.
To study this problem, Daniel Plebanek and Vladimir Sloutsky at Ohio State University tested how much children and adults notice about objects and how good they are at detecting changes. The experimenters showed a series of images of green and red shapes to 34 children, ages 4 and 5, and 35 adults. The researchers asked the participants to pay attention to the red shapes and to ignore the green ones. In the second part of the experiment, they showed another set of images of red and green shapes to participants and asked: Had the shapes remained the same or were they different?
Adults were better than children at noticing when the red shapes had changed. That’s not surprising: Adults are better at focusing their attention and learning as a result. But the children beat the adults when it came to the green shapes. They had learned more about the unattended objects than the adults and noticed when the green shapes changed. In other words, the adults only seemed to learn about the object in their attentional spotlight, but the children learned about the background, too.
We often say that young children are bad at paying attention. But what we really mean is that they’re bad at not paying attention, that they don’t screen out the world as grown-ups do. Children learn as much as they can about the world around them, even if it means that they get distracted by the distant airplane in the sky or the speck of paper on the floor when you’re trying to get them out the door to preschool.
Grown-ups instead focus and act effectively and swiftly, even if it means ignoring their surroundings. Children explore, adults exploit. There is a moral here for adults, too. We are often so focused on our immediate goals that we miss unexpected developments and opportunities. Sometimes by focusing less, we can actually see more.
So if you want to expand your consciousness, you can try psychedelic drugs, mysticism or meditation. Or you can just go for a walk with a 4-year-old.
I took my grandchildren this week to see “The Nutcracker.” At the crucial moment in the ballet, when the Christmas tree magically expands, my 3-year-old granddaughter, her head tilted up, eyes wide, let out an impressive, irrepressible “Ohhhh!”
The image of that enchanted tree captures everything marvelous about the holiday, for believers and secular people alike. The emotion that it evokes makes braving the city traffic and crowds worthwhile.
What the children, and their grandmother, felt was awe—that special sense of the vastness of nature, the universe, the cosmos, and our own insignificance in comparison. Awe can be inspired by a magnificent tree or by Handel’s “Hallelujah Chorus” or by Christmas Eve Mass at Notre-Dame cathedral in Paris.
But why does this emotion mean so much to us? Dacher Keltner, a psychologist who teaches (as I do) at the University of California, Berkeley, has been studying awe for 15 years. He and his research colleagues think that the emotion is as universal as happiness or anger and that it occurs everywhere with the same astonished gasp. In one study Prof. Keltner participated in, villagers in the Himalayan kingdom of Bhutan who listened to a brief recording of American voices immediately recognized the sound of awe.
Prof. Keltner’s earlier research has also shown that awe is good for us and for society. When people experience awe—looking up at a majestic sequoia, for example—they become more altruistic and cooperative. They are less preoccupied by the trials of daily life.
Why does awe have this effect? A new study, by Prof. Keltner, Yang Bai and their colleagues, conditionally accepted in the Journal of Personality and Social Psychology, shows how awe works its magic.
Awe’s most visible psychological effect is to shrink our egos, our sense of our own importance. Ego may seem very abstract, but in the new study the researchers found a simple and reliable way to measure it. The team showed their subjects seven circles of increasing size and asked them to pick the one that corresponded to their sense of themselves. Those who reported feeling more important or more entitled selected a bigger circle; they had bigger egos.
The researchers asked 83 participants from the U.S. and 88 from China to keep a diary of their emotions. It turned out that, on days when they reported feeling awe, they selected smaller circles to describe themselves.
Then the team arranged for more than a thousand tourists from many countries to do the circle test either at the famously awe-inspiring Yosemite National Park or at Fisherman’s Wharf on San Francisco’s waterfront, a popular but hardly awesome spot. Only Yosemite made participants from all cultures feel smaller.
Next, the researchers created awe in the lab, showing people awe-inspiring or funny video clips. Again, only the awe clips shrank the circles. The experimenters also asked people to draw circles representing themselves and the people close to them—with the distance between circles indicating how close they felt to others. Feelings of awe elicited more and closer circles; the awe-struck participants felt more social connection to others.
The team also asked people to draw a ladder and represent where they belonged on it—a reliable measure of status. Awe had no effect on where people placed themselves on this ladder—unlike an emotion such as shame, which takes people down a notch in their own eyes. Awe makes us feel less egotistical, but at the same time it expands our sense of well-being rather than diminishing it.
The classic awe-inspiring stimuli in these studies remind people of the vastness of nature: tall evergreens or majestic Yosemite waterfalls. But even very small stimuli can have the same effect. Another image of this season, a newborn child, transcends any particular faith, or lack of faith, and inspires awe in us all.
Why do we like people like us? We take it for granted that grown-ups favor the “in-group” they belong to and that only the hard work of moral education can overcome that preference. There may well be good evolutionary reasons for this. But is it a scientific fact that we innately favor our own?
A study in 2007, published in the Proceedings of the National Academy of Sciences by Katherine Kinzler and her colleagues, suggested that even babies might prefer their own group. The authors found that 10-month-olds preferred to look at people who spoke the same language they did. In more recent studies, researchers have found that babies also preferred to imitate someone who spoke the same language. So our preference for people in our own group might seem to be part of human nature.
But a new study in the same journal by Katarina Begus of Birkbeck, University of London and her colleagues suggests a more complicated view of humanity. The researchers started out exploring the origins of curiosity. When grown-ups think that they are about to learn something new, their brains exhibit a pattern of activity called a theta wave. The researchers fitted out 45 11-month-old babies with little caps covered with electrodes to record brain activity, to see whether the babies would also produce theta waves when they thought that they might learn something new.
The babies saw two very similar-looking people interact with a familiar toy like a rubber duck. One experimenter pointed at the toy and said, “That’s a duck.” The other just pointed at the object and, instead of naming it, said “oooh” in an uninformative way.
Then the babies saw one of the experimenters pick up an unfamiliar gadget. You would expect that the person who told you the name of the duck could also tell you about this new thing. And, sure enough, when the babies saw the informative experimenter, their brains produced theta waves, as if they expected to learn something. On the other hand, you might expect that the experimenter who didn’t tell you anything about the duck would also be unlikely to help you learn more about the new object. Indeed, the babies didn’t produce theta waves when they saw this uninformative person.
This experiment suggested that the babies in the earlier 2007 study might have been motivated by curiosity rather than by bias. Perhaps they preferred someone who spoke their own language because they thought that person could teach them the most.
So to test this idea, the experimenters changed things a little. In the first study, one experimenter named the object, and the other didn’t. In the new study, one experimenter said “That’s a duck” in English—the babies’ native language—while the other said, “Mira el pato,” describing the duck in Spanish—an unfamiliar language. Sure enough, their brains produced theta waves only when they saw the English speaker pick up the new object. The babies responded as if the person who spoke the same language would also tell them more about the new thing.
So 11-month-olds already are surprisingly sensitive to new information. Babies leap at the chance to learn something new—and can figure out who is likely to teach them. The babies did prefer the person in their own group, but that may have reflected curiosity, not bias. They thought that someone who spoke the same language could tell them the most about the world around them.
There is no guarantee that our biological reflexes will coincide with the demands of morality. We may indeed have to use reason and knowledge to overcome inborn favoritism toward our own group. But the encouraging message of the new study is that the desire to know—that keystone of human civilization—may form a deeper part of our nature than mistrust and discrimination.
Last week, I stumbled on a beautiful and moving picture of young children learning. It’s a fragment of a silent 1928 film from the Harold E. Jones Child Study Center in Berkeley, Calif., founded by a pioneer in early childhood education. The children would be in their 90s now. But in that long-distant idyll, in their flapper bobs and old-fashioned smocks, they play (cautiously) with a duck and a rabbit, splash through a paddling pool, dig in a sandbox, sing and squabble.
Suddenly, I had a shock. A teacher sawed a board in half, and a boy, surely no older than 5, imitated him with his own saw, while a small girl hammered in nails. What were the teachers thinking? Why didn’t somebody stop them?
My 21st-century reaction reflects a very recent change in the way that we think about children, risk and learning. In a recent paper titled “Playing with Knives” in the journal Child Development, the anthropologist David Lancy analyzed how young children learn across different cultures. He compiled a database of anthropologists’ observations of parents and children, covering over 100 preindustrial societies, from the Dusun in Borneo to the Pirahã in the Amazon and the Aka in Africa. Then Dr. Lancy looked for commonalities in what children and adults did and said.
In recent years, the psychologist Joseph Henrich and colleagues have used the acronym WEIRD—that is, Western, educated, industrialized, rich and democratic—to describe the strange subset of humans who have been the subject of almost all psychological studies. Dr. Lancy’s paper makes the WEIRDness of our modern attitudes toward children, for good or ill, especially vivid.
He found some striking similarities in the preindustrial societies that he analyzed. Adults take it for granted that young children are independently motivated to learn and that they do so by observing adults and playing with the tools that adults use—like knives and saws. There is very little explicit teaching.
And children do, in fact, become competent surprisingly early. Among the Maniq hunter-gatherers in Thailand, 4-year-olds skin and gut small animals without mishap. In other cultures, 3- to 5-year-olds successfully use a hoe, fishing gear, blowpipe, bow and arrow, digging stick and mortar and pestle.
The anthropologists were startled to see parents allow and even encourage their children to use sharp tools. When a Pirahã toddler played with a sharp 9-inch knife and dropped it on the ground, his mother, without interrupting her conversation, reached over and gave it back to him. Dr. Lancy concludes: “Self-initiated learners can be seen as a source for both the endurance of culture and of change in cultural patterns and practices.”
He notes that, of course, early knife skills can come at the cost of severed fingers. To me, like most adults in my WEIRD culture, that is far too great a risk even to consider.
But trying to eliminate all such risks from children’s lives also might be dangerous. There may be a psychological analog to the “hygiene hypothesis” proposed to explain the dramatic recent increase in allergies. Thanks to hygiene, antibiotics and too little outdoor play, children don’t get exposed to microbes as they once did. This may lead them to develop immune systems that overreact to substances that aren’t actually threatening—causing allergies.
In the same way, by shielding children from every possible risk, we may lead them to react with exaggerated fear to situations that aren’t risky at all and isolate them from the adult skills that they will one day have to master. We don’t have the data to draw firm causal conclusions. But at least anecdotally, many young adults now seem to feel surprisingly and irrationally fragile, fearful and vulnerable: I once heard a high schooler refuse to take a city bus “because of liability issues.”
Drawing the line between allowing foolhardiness and inculcating courage isn’t easy. But we might have something to learn from the teachers and toddlers of 1928.
Education is the engine of social mobility and equality. But that engine has been sputtering, especially for the children who need help the most. Minority and disadvantaged children are especially likely to be suspended from school and to drop out of college. Why? Is it something about the students or something about the schools? And what can we do about it?
Two recent studies published in the Proceedings of the National Academy of Sciences offer some hope. Just a few brief, inexpensive online interventions significantly reduced suspension and dropout rates, especially for disadvantaged groups. That might seem surprising, but it reflects the insights of an important new psychological theory.
The psychologist Carol Dweck at Stanford has argued that both teachers and students have largely unconscious “mind-sets”—beliefs and expectations—about themselves and others and that these can lead to a cascade of self-fulfilling prophecies. A teacher may start out, for example, being just a little more likely to think that an African-American student will be a troublemaker. That makes her a bit more punitive in disciplining that student. The student, in turn, may start to think that he is being treated unfairly, so he reacts to discipline with more anger, thus confirming the teacher’s expectations. She reacts still more punitively, and so on. Without intending to, they can both end up stuck in a vicious cycle that greatly amplifies what were originally small biases.
In the same way, a student who is the first in her family to go to college may be convinced that she won’t be able to fit in socially or academically. When she comes up against the inevitable freshman hurdles, she interprets them as evidence that she is doomed to fail. And she won’t ask for help because she feels that would just make her weakness more obvious. She too ends up stuck in a vicious cycle.
Changing mind-sets is hard—simply telling people that they should think differently often backfires. The two new studies used clever techniques to get them to take on different mind-sets more indirectly. The studies are also notable because they used the gold-standard method of randomized, controlled trials, with over a thousand participants total.
In the first study, by Jason Okonofua, David Paunesku and Greg Walton at Stanford, the experimenters asked a group of middle-school math teachers to fill out a set of online materials at the start of school. The materials described vivid examples of how you could discipline students in a respectful, rather than a punitive, way.
But the most important part was a section that asked the teachers to provide examples of how they themselves used discipline respectfully. The researchers told the participants that those examples could be used to train others—treating the teachers as experts with something to contribute. Another group of math teachers got a control questionnaire about using technology in the classroom.
At the end of the school year, the students of teachers who received the discipline materials were suspended only half as often as the students of the control group—a rate of 4.6% compared with 9.8%.
In the other study, by Dr. Dweck and her colleagues, the experimenters gave an online package to disadvantaged students from a charter school who were about to enter college. One group got materials saying that all new students had a hard time feeling that they belonged but that those difficulties could be overcome. The package also asked the students to write an essay describing how those challenges could be met—an essay that could help other students. A control group answered similar questions about navigating buildings on the campus.
Only 32% of the control group were still enrolled in college by the end of the year, but 45% of the students who got the mind-set materials were enrolled.
The researchers didn’t tell people to have a better attitude. They just encouraged students and teachers to articulate their own best impulses. That changed mind-sets—and changed lives.
One day in 2006, Paul Wagner donated one of his kidneys to a stranger with kidney failure. Not long before, he had been reading the paper on his lunch break at a Philadelphia company and saw an article about kidney donation. He clicked on the website and almost immediately decided to donate.
One day in 2008, Scott Johnson was sitting by a river in Michigan, feeling aggrieved at the world. He took out a gun and killed three teenagers who were out for a swim. He showed no remorse or guilt—instead, he talked about how other people were always treating him badly. In an interview, Mr. Johnson compared his killing spree to spilling a glass of milk.
These events were described in two separate, vivid articles in a 2009 issue of the New Yorker. Larissa MacFarquhar, who wrote about Mr. Wagner, went on to include him in her wonderful recent book about extreme altruists, “Strangers Drowning.”
For most of us, the two stories are so fascinating because they seem almost equally alien. It’s hard to imagine how someone could be so altruistic or so egotistic, so kind or so cruel.
The neuroscientist Abigail Marsh at Georgetown University started out studying psychopaths—people like Scott Johnson. There is good scientific evidence that psychopaths are very different from other kinds of criminals. In fact, many psychopaths aren’t criminals at all. They can be intelligent and successful and are often exceptionally charming and charismatic.
Psychopaths have no trouble understanding how other people’s minds work; in fact, they are often very good at manipulating people. But from a very young age, they don’t seem to respond to the fear or distress of others.
Psychopaths also show distinctive patterns of brain activity. When most of us see another person express fear or distress, the amygdala—a part of our brain that is important for emotion—becomes particularly active. That activity is connected to our immediate, intuitive impulse to help. The brains of psychopaths don’t respond to someone else’s fear or distress in the same way, and their amygdalae are smaller overall.
But we know much less about extreme altruists like Paul Wagner. So in a study with colleagues, published in 2014 in the Proceedings of the National Academy of Sciences, Dr. Marsh looked at the brain activity of people who had donated a kidney to a stranger. Like Mr. Wagner, most of these people said that they had made the decision immediately, intuitively, almost as soon as they found out that it was possible.
The extreme altruists showed exactly the opposite pattern from the psychopaths: The amygdalae of the altruists were larger than normal, and they activated more in response to a face showing fear. The altruists were also better than typical people at detecting when another person was afraid.
These brain studies suggest that there is a continuum in how we react to other people, with the psychopaths on one end of the spectrum and the saints at the other. We all see the world from our own egotistic point of view, of course. The poet Philip Larkin once wrote: “Yours is the harder course, I can see. On the other hand, mine is happening to me.”
But for most of us, that perspective is extended to include at least some other people, though not all. We see fear or distress on the faces of those we love, and we immediately, intuitively, act to help. No one is surprised when a mother donates her kidney to her child.
The psychopath can’t seem to feel anyone’s needs except his own. The extreme altruist feels everybody’s needs. The rest of us live, often uneasily and guiltily, somewhere in the middle.
This year, in elections all across the country, individuals will compete for various positions of power. The one who gets more people to support him or her will prevail.
Democratic majority rule, the idea that the person with more supporters should win, may be a sophisticated and relatively recent political invention. But a new study in the Proceedings of the National Academy of Sciences suggests that the expectation that the majority will prevail runs much deeper in our evolutionary history.
Andrew Scott Baron and colleagues at the University of British Columbia studied some surprisingly sophisticated political observers and prognosticators. It turns out that even 6-month-old babies predict that the guy with more allies will prevail in a struggle. They are pundits in diapers.
How could we possibly know this? Babies will look longer at something that is unexpected or surprising. Developmental researchers have exploited this fact in very clever ways to figure out what babies think. In Dr. Baron’s study, the experimenters showed 6- to 9-month-old babies a group of three simplified green cartoon characters and a group of two blue ones (the colors were different on different trials).
Then they showed the babies a brief cartoon of one of the green guys and one of the blue guys trying to cross a platform that only had room for one character at a time, like Robin Hood and Little John facing off across a single log bridge. Which character would win and make it across the platform?
The babies looked longer when the blue guy won. They seemed to expect that the green guy, the guy with more buddies, would win, and they were surprised when the guy from the smaller group won instead.
In a 2011 study published in the journal Science, Susan Carey at Harvard and her colleagues found that 9-month-olds also think that might makes right: The babies expected that a physically bigger character would win out over a smaller one. But the new study showed that babies also think that allies are even more important than mere muscle. The green guy and the blue guy on the platform were the same size. And the green guy’s allies were actually a little smaller than the blue guy’s friends. But the babies still thought that the character who had two friends would win out over the character who had just one, even if those friends were a bit undersized.
What’s more, the babies only expected the big guys to win once they were about 9 months old. But they already thought the guy with more friends would win when they were just 6 months old.
This might seem almost incredible: Six-month-olds, after all, can’t sit up yet, let alone caucus or count votes. But the ability may make evolutionary sense. Chimpanzees, our closest primate relatives, have sophisticated political skills. A less powerful chimp who calls on several other chimps for help can overthrow even the most ferociously domineering alpha male. Our human ancestors made alliances, too. It makes sense that even young babies are sensitive to the size of social groups and the role they play in power.
We often assume that politics is a kind of abstract negotiation between autonomous individual interests—voters choose candidates because they think those candidates will enact the policies they want. But the new studies of the baby pundits suggest a different picture. Alliance and dominance may be more fundamental human concepts than self-interest and negotiation. Even grown-up voters may be thinking more about who belongs to what group, or who is top dog, than who has the best health-care plan or tax scheme.
Every year on the website Edge, scientists and other thinkers reply to one question. This year it’s: What do you consider the most interesting recent news in science? The answers are fascinating. We’re used to thinking of news as the events that happen in a city or country within a few weeks or months. But scientists expand our thinking to the unimaginably large and the infinitesimally small.
Despite this extraordinary range, the answers of the Edge contributors have an underlying theme. The biggest news of all is that a handful of large-brained primates on an insignificant planet have created machines that let them understand the world, at every scale, and let them change it too, for good or ill.
Here is just a bit of the scientific news. The Large Hadron Collider—the giant particle accelerator in Geneva—is finally fully functional. So far the new evidence from the LHC has mostly just confirmed the standard model of physics, which helps explain everything from the birth of time to the end of the world. But at the tiny scale of the basic particles it is supposed to investigate, the Large Hadron Collider has detected a small blip—something really new may just be out there.
Our old familiar solar system, though, has turned out to be full of surprises. Unmanned spacecraft have discovered that the planets surrounding us are more puzzling, peculiar and dynamic than we would ever have thought. Mars once had water. Pluto, which was supposed to be an inert lump, like the moon, turns out to be a dynamic world with glaciers of nitrogen.
On our own planet, the big, disturbing news is that the effects of carbon emissions on the climate are ever more evident and immediate. The ice sheets are melting, sea levels are rising, and last year was almost certainly the warmest on record. Our human response, in contrast, is achingly slow.
When it comes to all the living things that inhabit that planet, the big news is the new Crispr gene-editing technology. The technique means that we can begin to rewrite the basic genetic code of all living beings—from mosquitoes to men.
The news about our particular human bodies and their ills is especially interesting. The idea that tiny invisible organisms make us sick was one of the great triumphs of the scientific expansion of scale. But new machines that detect the genetic signature of bacteria have shown that those invisible germs—the “microbiome”—aren’t really the enemy. In fact, they’re essential to keeping us well, and the great lifesaving advance of antibiotics comes with a cost.
The much more mysterious action of our immune system is really the key to human health, and that system appears to play a key role in everything from allergies to obesity to cancer.
If new technology is helping us to understand and mend the human body, it is also expanding the scope of the human mind. We’ve seen lots of media coverage about artificial intelligence over the past year, but the basic algorithms are not really new. The news is the sheer amount of data and computational power that is available.
Still, even if those advances are just about increases in data and computing power, they could profoundly change how we interact with the world. In my own contribution to answering the Edge question, I talked about the fact that toddlers are starting to interact with computers and that the next generation will learn about computers in a radically new way.
From the Large Hadron Collider to the Mars Rover, from Crispr to the toddler’s iPad, the news is that technologies let us master the universe and ourselves and reshape the planet. What we still don’t know is whether, ultimately, these developments are good news or bad.
It’s midnight on Halloween. You walk through a deserted graveyard as autumn leaves swirl around your feet. Suddenly, inexplicably and yet with absolute certainty, you feel an invisible presence by your side. Could it be a ghost? A demon? Or is it just an asynchrony in sensorimotor integration in the frontoparietal cortex?
A 2014 paper in the journal Current Biology by Olaf Blanke at the University Hospital of Geneva and his colleagues supports the last explanation. For millennia people have reported vividly experiencing an invisible person nearby. The researchers call it a “feeling of presence.” It can happen to any of us: A Pew Research Center poll found that 18% of Americans say they have experienced a ghost.
But patients with particular kinds of brain damage are especially likely to have this experience. The researchers found that specific areas of these patients’ frontoparietal cortex were damaged—the same brain areas that let us sense our own bodies.
Those results suggested that the mysterious feeling of another presence might be connected to the equally mysterious feeling of our own presence—that absolute certainty that there is an “I” living inside my body. The researchers decided to try to create experimentally the feeling of presence. Plenty of people without evident brain damage say they have felt a ghost was present. Could the researchers systematically make ordinary people experience a disembodied spirit?
They tested 50 ordinary, healthy volunteers. In the experiment, you stand between two robots and touch the robot in front of you with a stick. That “master” robot sends signals that control the second “slave” robot behind you. The slave robot reproduces your movements and uses them to control another stick that strokes your back. So you are stroking something in front of you, but you feel those same movements on your own back. The result is a very strong sense that somehow you are touching your own back, even though you know that’s physically impossible. The researchers have manipulated your sense of where your self begins and ends.
Then the researchers changed the set-up just slightly. Now the slave robot touches your back half a second after you touch the master robot, so there is a brief delay between what you do and what you feel. Now people in the experiment report a “feeling of presence”: They say that somehow there is an invisible ghostly person in the room, even though that is also physically impossible.
If we put that result together with the brain-damage studies, it suggests an intriguing possibility. When we experience ghosts and spirits, angels and demons, we are really experiencing a version of ourselves. Our brains construct a picture of the “I” peering out of our bodies, and if something goes slightly wrong in that process—because of brain damage, a temporary glitch in normal brain processing or the wiles of an experimenter—we will experience a ghostly presence instead.
So, in the great “Scooby-Doo” tradition, we’ve cleared up the mystery, right? The ghost turned out just to be you in disguise? Not quite. All good ghost stories have a twist, what Henry James called “The Turn of the Screw.” The ghost in the graveyard was just a creation of your brain. But the “you” who met the ghost was also just the creation of your brain. In fact, the same brain areas that made you feel someone else was there are the ones that made you feel that you were there too.
If you’re a good, hard-headed scientist, it’s easy to accept that the ghost was just a Halloween illusion, fading into the mist and leaves. But what about you, that ineffable, invisible self who inhabits your body and peers out of your eyes? Are you just a frontoparietal ghost too? Now that’s a really scary thought.
This summer my 93-year-old mother-in-law died, a few months after her 94-year-old husband. For the last five years she had suffered from Alzheimer’s disease. By the end she had forgotten almost everything, even her children’s names, and had lost much of what defined her—her lively intelligence, her passion for literature and history.
Still, what remained was her goodness, a characteristic warmth and sweetness that seemed to shine even more brightly as she grew older. Alzheimer’s can make you feel that you’ve lost the person you loved, even though they’re still alive. But for her children, that continued sweetness meant that, even though her memory and intellect had gone, she was still Edith.
A new paper in Psychological Science reports an interesting collaboration between the psychologist Nina Strohminger at Yale University and the philosopher Shaun Nichols at the University of Arizona. Their research suggests that Edith was an example of a more general and rather surprising principle: Our identity comes more from our moral character than from our memory or intellect.
Neurodegenerative diseases like Alzheimer’s make especially vivid a profound question about human nature. In the tangle of neural connections that make up my brain, where am I? Where was Edith? When those connections begin to unravel, what happens to the person?
Many philosophers have argued that our identity is rooted in our continuous memories or in our accumulated knowledge. Drs. Strohminger and Nichols argue instead that we identify people by their moral characteristics, their gentleness or kindness or courage—if those continue, so does the person. To test this idea the researchers compared different kinds of neurodegenerative diseases in a 248-participant study. They compared Alzheimer’s patients to patients who suffer from fronto-temporal dementia, or FTD.
FTD is the second most common type of dementia after Alzheimer’s, though it affects far fewer people and usually targets a younger age group. Rather than attacking the memory areas of the brain, it damages the frontal control areas. These areas are involved in impulse control and empathy—abilities that play a particularly important role in our moral lives.
As a result, patients may change morally even though they retain memory and intellect. They can become indifferent to other people or be unable to control the impulse to be rude. They may even begin to lie or steal.
Finally, the researchers compared both groups to patients with amyotrophic lateral sclerosis, or ALS, who gradually lose motor control but not other capacities. (Physicist Stephen Hawking suffers from ALS.)
The researchers asked spouses or children caring for people with these diseases to fill out a questionnaire about how the patients had changed, including changes in memory, cognition and moral behavior. They also asked questions like, “How much do you sense that the patient is still the same person underneath?” or, “Do you feel like you still know who the patient is?”
The researchers found that the people who cared for the FTD patients were much more likely to feel that they had become different people than the caregivers of the Alzheimer’s patients. The ALS caregivers were least likely to feel that the patient had become a different person. What’s more, a sophisticated statistical analysis showed that this was the effect of changes in the patient’s moral behavior in particular. Across all three groups, changes in moral behavior predicted changes in perceived identity, while changes in memory or intellect did not.
These results suggest something profound. Our moral character, after all, is what links us to other people. It’s the part of us that goes beyond our own tangle of neurons to touch the brains and lives of others. Because that moral character is central to who we are, there is a sense in which Edith literally, and not just metaphorically, lives on in the people who loved her.
Walk into any preschool classroom and you’ll see that some 4-year-olds are always getting into fights—while others seldom do, no matter the provocation. Even siblings can differ dramatically—remember Cain and Abel. Is it nature or nurture that causes these deep differences in aggression?
The new techniques of genomics—mapping an organism’s DNA and analyzing how it works—initially led people to think that we might find a gene for undesirable individual traits like aggression. But from an evolutionary point of view, the very idea that a gene can explain traits that vary so dramatically is paradoxical: If aggression is advantageous, why didn’t the gene for aggression spread more widely? If it’s harmful, why would the gene have survived at all?
Two new studies suggest that the relationship between genes and aggression is more complicated than a mere question of nature vs. nurture. And those complications may help to resolve the evolutionary paradox.
In earlier studies, researchers looked at variation in a gene involved in making brain chemicals. Children with a version of the gene called VAL were more likely to become aggressive than those with a variation called MET. But this only happened if the VAL children also experienced stressful events like abuse, violence or illness. So it seemed that the VAL version of the gene made the children more vulnerable to stress, while the MET version made them more resilient.
A study published last month in the journal Developmental Psychology, by Beate Hygen and colleagues from the Norwegian University of Science and Technology and Jay Belsky of U.C. Davis, found that the story was even more complicated. They analyzed the genes of hundreds of Norwegian 4-year-olds. They also got teachers to rate how aggressive the children were and parents to record whether the children had experienced stressful life events.
As in the earlier studies, the researchers found that children with the VAL variant were more aggressive when they were subjected to stress. But they also found something else: When not subjected to stress, these children were actually less aggressive than the MET children.
Dr. Belsky has previously used the metaphor of orchids and dandelions to describe types of children. Children with the VAL gene seem to be more sensitive to the environment, for good and bad, like orchids that can be magnificent in some environments but wither in others. The MET children are more like dandelions, coming out somewhere in the middle no matter the conditions.
Dr. Belsky has suggested that this explanation for individual variability can help to resolve the evolutionary puzzle. Including both orchids and dandelions in a group of children gives the human race a way to hedge its evolutionary bets. A study published online in May in the journal Developmental Science, by Dr. Belsky with Willem Frankenhuis and Karthik Panchanathan, used mathematical modeling to explore this idea more precisely.
If a species lives in a predictable, stable environment, it would be adaptive for its behavior to fit that environment as closely as possible. But suppose you live in an environment that changes unpredictably. In that case, you might want to diversify your genetic portfolio. Investing in dandelions is like putting your money in bonds: It’s safe and reliable and will give you a constant, if small, return in many conditions.
Investing in orchids is higher risk, but it also promises higher returns. If conditions change, then the orchids will be able to change with them. Being mean might sometimes pay off, but only when times are tough. Cooperation will be more valuable when resources are plentiful. The risk is that the orchids may get it wrong—a few stressful early experiences might make a child act as if the world is hard, even when it isn’t. In fact, the model showed that when environments change substantially over time, a mix of orchids and dandelions is the most effective strategy.
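The portfolio logic can be sketched in a few lines. This is a minimal illustration of geometric-mean (bet-hedging) fitness, not the Frankenhuis–Panchanathan–Belsky model itself; all of the payoff numbers below are invented for the example.

```python
import math

# Illustrative payoffs (made up, not from the paper): dandelions
# always return 1.0; orchids return 3.0 in good years but only
# 0.1 in harsh ones.
DANDELION = 1.0
ORCHID_GOOD, ORCHID_BAD = 3.0, 0.1

def log_growth(q, p_good):
    """Long-run (log) growth rate of a lineage that makes a fraction
    q of its offspring orchids, when a fraction p_good of generations
    are good years. Fitness multiplies across generations, so we
    average the *log* of the per-generation growth factor."""
    good = q * ORCHID_GOOD + (1 - q) * DANDELION
    bad = q * ORCHID_BAD + (1 - q) * DANDELION
    return p_good * math.log(good) + (1 - p_good) * math.log(bad)

def best_mix(p_good, steps=1000):
    """Grid-search for the orchid fraction with the best growth rate."""
    qs = [i / steps for i in range(steps + 1)]
    return max(qs, key=lambda q: log_growth(q, p_good))

print(best_mix(1.0))  # stable good environment: all orchids win
print(best_mix(0.5))  # unpredictable environment: a mix wins
```

Because fitness compounds multiplicatively, one disastrous generation can nearly wipe out a lineage; keeping some dandelions caps the downside, which is why a mixed portfolio beats either pure strategy when good and bad years alternate unpredictably.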
We human beings perpetually redesign our living space and social circumstances. By its very nature, our environment is unpredictable. That may be why every preschool class has its mix of the sensitive and the stolid.
A fifth or more of American children grow up in poverty, with the situation worsening since 2000, according to census data. At the same time, as education researcher Sean Reardon has pointed out, an “income achievement gap” is widening: Low-income children do much worse in school than higher-income children.
Since education plays an ever bigger role in how much we earn, a cycle of poverty is trapping more American children. It’s hard to think of a more important project than understanding how this cycle works and trying to end it.
Neuroscience can contribute to this project. In a new study in Psychological Science, John Gabrieli at the Massachusetts Institute of Technology and his colleagues used imaging techniques to measure the brains of 58 14-year-old public school students. Twenty-three of the children qualified for free or reduced-price lunch; the other 35 were middle-class.
The scientists found consistent brain differences between the two groups. The researchers measured the thickness of the cortex—the brain’s outer layer—in different brain areas. The low-income children had developed thinner cortices than the high-income children.
The low-income group had more ethnic and racial minorities, but statistical analyses showed that ethnicity and race were not associated with brain thickness, although income was. Children with thinner cortices also tended to do worse on standardized tests than those with thicker ones. This was true for high-income as well as low-income children.
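The statistical claim here is about controlling for correlated predictors: in a regression that includes both income and minority status, each coefficient reflects one variable’s association with thickness while holding the other fixed. A toy sketch of that logic, with entirely simulated data (nothing below comes from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Made-up world: minority status is correlated with lower income,
# but cortical thickness depends only on income (plus noise).
minority = rng.random(n) < 0.5
income = rng.normal(loc=np.where(minority, -0.5, 0.5), scale=1.0)
thickness = 0.3 * income + rng.normal(scale=1.0, size=n)

# Regress thickness on both predictors at once; each coefficient
# is that variable's association holding the other one fixed.
X = np.column_stack([np.ones(n), income, minority.astype(float)])
coef, *_ = np.linalg.lstsq(X, thickness, rcond=None)
print(coef)  # intercept, income effect (~0.3), minority effect (~0)
```

In this simulated world minority children do have thinner cortices on average, but only because minority status is correlated with income; the adjusted minority coefficient comes out near zero, which is the sense in which income, not ethnicity, “was associated” with thickness.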
Of course, just finding brain differences doesn’t tell us much. By definition, something about the brains of the children must be different, since their behavior on the tests varies so much. But finding this particular brain difference at least suggests some answers.
The brain is the most complex system on the planet, and brain development involves an equally complex web of interactions between genes and the physical, social and intellectual environment. We still have much to learn.
But we do know that the brain is, as neuroscientists say, plastic. The process of evolution has designed brains to be shaped by the outside world. That’s the whole point of having one. Two complementary processes play an especially important role in this shaping. In one process, what neuroscientists call “proliferation,” the brain makes many new connections between neurons. In the other process, “pruning,” some existing connections get stronger, while others disappear. Experience heavily influences both proliferation and pruning.
Early in development, proliferation prevails. Young children make many more new connections than adults do. Later in development, pruning grows in importance. Humans shift from a young brain that is flexible and good at learning, to an older brain that is more effective and efficient, but more rigid. A change in the thickness of the cortex seems to reflect this developmental shift. While in childhood the cortex gradually thickens, in adolescence this process is reversed and the cortex gets thinner, probably because of pruning.
We don’t know whether the low-income 14-year-olds in this study failed to grow thicker brains as children, or whether they shifted to thinner brains more quickly in adolescence.
There are also many differences in the experiences of low-income and high-income children, aside from income itself—differences in nutrition, stress, learning opportunities, family structure and many more. We don’t know which of these differences led to the differences in cortical thickness.
But we can find some hints from animal studies. Rats raised in enriched environments, with lots of things to explore and opportunities to learn, develop more neural connections. Rats subjected to stress develop fewer connections. Some evidence exists that stress also makes animals grow up too quickly, even physically, with generally bad effects. And nutrition influences brain development in all animals.
The important point, and the good news, is that brain plasticity never ends. Brains can be changed throughout life, and we never entirely lose the ability to learn and change. But, equally importantly, childhood is the time of the greatest opportunity, and the greatest risk. We lose the potential of millions of young American brains every day.
Watch a 1-year-old baby carefully for a while, and count how many experiments you see. When Georgiana, my 17-month-old granddaughter, came to visit last weekend, she spent a good 15 minutes exploring the Easter decorations—highly puzzling, even paradoxical, speckled Styrofoam eggs. Are they like chocolate eggs or hard-boiled eggs? Do they bounce? Will they roll? Can you eat them?
Some of my colleagues and I have argued for 20 years that even the youngest children learn about the world in much the way that scientists do. They make up theories, analyze statistics, try to explain unexpected events and even do experiments. When I write for scholarly journals about this “theory theory,” I talk about it very abstractly, in terms of ideas from philosophy, computer science and evolutionary biology.
But the truth is that, at least for me, personally, watching Georgie is as convincing as any experiment or argument. I turn to her granddad and exclaim “Did you see that? It’s amazing! She’s destined to be an engineer!” with as much pride and astonishment as any nonscientist grandma. (And I find myself adding, “Can you imagine how cool it would be if your job was to figure out what was going on in that little head?” Of course, that is supposed to be my job—but like everyone else in the information economy, it often feels like all I ever actually do is answer e-mail.)
Still, the plural of anecdote is not data, and fond grandma observations aren’t science. And while guessing what babies think is easy and fun, proving it is really hard and takes ingenious experimental techniques.
In an amazingly clever new paper in the journal Science, Aimee Stahl and Lisa Feigenson at Johns Hopkins University show systematically that 11-month-old babies, like scientists, pay special attention when their predictions are violated, learn especially well as a result, and even do experiments to figure out just what happened.
They took off from some classic research showing that babies will look at something longer when it is unexpected. The babies in the new study either saw impossible events, like the apparent passage of a ball through a solid brick wall, or straightforward events, like the same ball simply moving through an empty space. Then they heard the ball make a squeaky noise. The babies were more likely to learn that the ball made the noise when the ball had passed through the wall than when it had behaved predictably.
In a second experiment, some babies again saw the mysterious dissolving ball or the straightforward solid one. Other babies saw the ball either rolling along a ledge or rolling off the end of the ledge and apparently remaining suspended in thin air. Then the experimenters simply gave the babies the balls to play with.
The babies explored objects more when they behaved unexpectedly. They also explored them differently depending on just how they behaved unexpectedly. If the ball had vanished through the wall, the babies banged the ball against a surface; if it had hovered in thin air, they dropped it. It was as if they were testing to see if the ball really was solid, or really did defy gravity, much like Georgie testing the fake eggs in the Easter basket.
In fact, these experiments suggest that babies may be even better scientists than grown-ups often are. Adults suffer from “confirmation bias”—we pay attention to the events that fit what we already know and ignore things that might shake up our preconceptions. Charles Darwin famously kept a special list of all the facts that were at odds with his theory, because he knew he’d otherwise be tempted to ignore or forget them.
Babies, on the other hand, seem to have a positive hunger for the unexpected. Like the ideal scientists proposed by the philosopher of science Karl Popper, babies are always on the lookout for a fact that falsifies their theories. If you want to learn the mysteries of the universe, that great, distinctively human project, keep your eye on those weird eggs.
We learn to be afraid. One of the oldest discoveries in psychology is that rats will quickly learn to avoid a sound or a smell that has been associated with a shock in the past—they not only fear the shock, they become scared of the smell, too.
A paper by Nim Tottenham of the University of California, Los Angeles in the journal Current Topics in Behavioral Neurosciences summarizes recent research on how this learned fear system develops, in animals and in people. Early experiences help shape the fear system. If caregivers protect us from danger early in life, this helps us to develop a more flexible and functional fear system later. Dr. Tottenham argues, in particular, that caring parents keep young animals from prematurely developing the adult system: They let rat pups be pups and children be children.
Of course, it makes sense to quickly learn to avoid events that have led to danger in the past. But it can also be paralyzing. There is a basic paradox about learning fear. Because we avoid the things we fear, we can’t learn anything more about them. We can’t learn that the smell no longer leads to a shock unless we take the risk of exploring the dangerous world.
Many mental illnesses, from general anxiety to phobias to post-traumatic stress disorder, seem to have their roots in the way we learn to be afraid. We can learn to be afraid so easily and so rigidly that even things that we know aren’t dangerous—the benign spider, the car backfire that sounds like a gunshot—can leave us terrified. Anxious people end up avoiding all the things that just might be scary, and that leads to an increasingly narrow and restricted life and just makes the fear worse. The best treatment is to let people “unlearn” their fears—gradually exposing them to the scary cause and showing them that it doesn’t actually lead to the dangerous effect.
Neuroscientists have explored the biological basis for this learned fear. It involves the coordination between two brain areas. One is the amygdala, an area buried deep in the brain that helps produce the basic emotion of fear, the trembling and heart-pounding. The other is the prefrontal cortex, which is involved in learning, control and planning.
Regina Sullivan and her colleagues at New York University have looked at how rats develop these fear systems. Young rats don’t learn to be fearful the way that older rats do, and their amygdala and prefrontal systems take a while to develop and coordinate. The baby rats “unlearn” fear more easily than the adults, and they may even approach and explore the smell that led to the shock, rather than avoid it.
If the baby rats are periodically separated from their mothers, however, they develop the adult mode of fear and the brain systems that go with it more quickly. This early maturity comes at a cost. Baby rats who are separated from their mothers have more difficulties later on, difficulties that parallel human mental illness.
Dr. Tottenham and her colleagues found a similar pattern in human children. They looked at children who had grown up in orphanages in their first few years of life but then were adopted by caring parents. When they looked at the children’s brains with functional magnetic resonance imaging, they found that, like the rats, these children seemed to develop adultlike “fear circuits” more quickly. Their parents were also more likely to report that the children were anxious. The longer the children had stayed in the orphanages, the more their fear system developed abnormally, and the more anxious they were.
The research fits with a broader evolutionary picture. Why does childhood exist at all? Why do people, and rats, put so much effort into protecting helpless babies? The people who care for children give them a protected space to figure out just how to cope with the dangerous adult world. Care gives us courage; love lets us learn.
Scientists have largely given up the idea of “innate talent,” as I said in my last column. This change might seem implausible and startling. We all know that some people are better than others at doing some things. And we all know that genes play a big role in shaping our brains. So why shouldn’t genes determine those differences?
Biologists talk about the relationship between a “genotype,” the information in your DNA, and a “phenotype,” the characteristics of an adult organism. These relationships turn out to be so complicated that parceling them out into percentages of nature and nurture is impossible. And, most significantly, these complicated relationships can change as environments change.
For example, Michael Meaney at McGill University has discovered “epigenetic” effects that allow nurture to reshape nature. Caregiving can turn genes on and off and rewire brain areas. In a 2000 study published in Nature Neuroscience he and colleagues found that some rats were consistently better at solving mazes than others. Was this because of innate maze-solving genes? These smart rats, it turned out, also had more attentive mothers. The researchers then “cross-fostered” the rat pups: They took the babies of inattentive mothers, who would usually not be so good at maze-solving, and gave them to the attentive mothers to raise, and vice versa. If the baby rats’ talent was innate, this should make no difference. If it wasn’t, it should make all the difference.
In fact, the inattentive moms’ babies who were raised by the attentive moms got smart, but the opposite pattern didn’t hold. The attentive moms’ babies stayed relatively smart even when they were raised by the inattentive moms. So genetics prevailed in the poor environment, but environment prevailed in the rich one. So was maze-solving innate or not? It turns out that it’s not the right question.
To study human genetics, researchers can compare identical and fraternal twins. Early twin studies found that IQ was “heritable”—identical twins were more similar than fraternal ones. But these studies looked at well-off children. Eric Turkheimer at the University of Virginia looked at twins in poor families and found that IQ was much less “heritable.” In the poor environment, small differences in opportunity swamped any genetic differences. When everyone had the same opportunities, the genetic differences had more effect. So is IQ innate or not? Again, the wrong question.
If you only studied rats this might be just academic. After all, rats usually are raised by their biological mothers. But the most important innate feature of human beings is our ability to transform our physical and social environments. Alone among animals, we can envision an unprecedented environment that might help us thrive, and make that environment a reality. That means we simply don’t know what the relationship between genes and environment will look like in the future.
Take IQ again. James Flynn, at New Zealand’s University of Otago, and others have shown that absolute IQ scores have been steadily and dramatically increasing, by as much as three points a decade. (The test designers have to keep making the questions harder to keep the average at 100.)
The best explanation is that we have consciously transformed our society into a world where schools are ubiquitous. So even though genes contribute to whatever IQ scores measure, IQ can change radically as a result of changes in environment. Abstract thinking and a thirst for knowledge might once have been a genetic quirk. In a world of schools, they become the human inheritance.
Thinking in terms of “innate talent” often leads to a kind of fatalism: Because right now fewer girls than boys do well at math, the assumption is that this will always be the case. But the actual science of genes and environment says just the opposite. If we want more talented children, we can change the world to create them.
Every January the intellectual impresario and literary agent John Brockman (who represents me, I should disclose) asks a large group of thinkers a single question on his website, edge.org. This year it is: “What do you think about machines that think?” There are lots of interesting answers, ranging from the skeptical to the apocalyptic.
I’m not sure that asking whether machines can think is the right question, though. As someone once said, it’s like asking whether submarines can swim. But we can ask whether machines can learn, and especially, whether they can learn as well as 3-year-olds.
Everyone knows that Alan Turing helped to invent the very idea of computation. Almost no one remembers that he also thought that the key to intelligence would be to design a machine that was like a child, not an adult. He pointed out, presciently, that the real secret to human intelligence is our ability to learn.
The history of artificial intelligence is fascinating because it has been so hard to predict what would be easy or hard for a computer. At first, we thought that things like playing chess or proving theorems—the bullfights of nerd machismo—would be hardest. But they turn out to be much easier than recognizing a picture of a cat or picking up a cup. And it’s actually easier to simulate a grandmaster’s gambit than to mimic the ordinary learning of every baby.
Recently, machine learning has helped computers to do things that were impossible before, like labeling Internet images accurately. Techniques like “deep learning” work by detecting complicated and subtle statistical patterns in a set of data.
But this success isn’t due to the fact that computers have suddenly developed new powers. The big advance is that, thanks to the Internet, they can apply these statistical techniques to enormous amounts of data—data that were predigested by human brains.
Computers can recognize Internet images only because millions of real people have sorted out the unbelievably complex information received by their retinas and labeled the images they post online—like, say, Instagrams of their cute kitty. The dystopian nightmare of “The Matrix” is now a simple fact: We’re all serving Google’s computers, under the anesthetizing illusion that we’re just having fun with LOLcats.
The trouble with this sort of purely statistical machine learning is that you can only generalize from it in a limited way, whether you’re a baby or a computer or a scientist. A more powerful way to learn is to formulate hypotheses about what the world is like and to test them against the data. One of the other big advances in machine learning has been to automate this kind of hypothesis-testing. Machines have become able to formulate hypotheses and test them against data extremely well, with consequences for everything from medical diagnoses to meteorology.
The really hard problem is deciding which hypotheses, out of all the infinite possibilities, are worth testing. Preschoolers are remarkably good at creating brand new, out-of-the-box creative concepts and hypotheses in a way that computers can’t even begin to match.
Preschoolers are also remarkably good at creating chaos and mess, as all parents know, and that may actually play a role in their creativity. Turing argued that it might be good if his child computer acted randomly, at least some of the time. The thought processes of 3-year-olds often seem random, even crazy. But children have an uncanny ability to zero in on the right sort of weird hypothesis—in fact, they can be substantially better at this than grown-ups. We have almost no idea how this sort of constrained creativity is possible.
There are, indeed, amazing thinking machines out there, and they will unquestionably far surpass our puny minds and eventually take over the world. We call them our children.
As we wade through the towers of presents and the mountains of torn wrapping paper, and watch the children’s shining, joyful faces and occasional meltdowns, we may find ourselves speculating—in a detached, philosophical way—about generosity and greed. That’s how I cope, anyway.
Are we born generous and then learn to be greedy? Or is it the other way round? Do immediate intuitive impulses or considered reflective thought lead to generosity? And how could we possibly tell?
Recent psychological research has weighed in on the intuitive-impulses side. People seem to respond quickly and perhaps even innately to the good and bad behavior of others. Researchers like Kiley Hamlin at the University of British Columbia have shown that even babies prefer helpful people to harmful ones. And psychologists like Jonathan Haidt at New York University’s Stern School of Business have argued that even adult moral judgments are based on our immediate emotional reactions—reflection just provides the after-the-fact rationalizations.
But some new studies suggest it’s more complicated. Jason Cowell and Jean Decety at the University of Chicago explored this question in the journal Current Biology. They used electroencephalography, or EEG, to monitor electrical activity in children’s brains. Their study had two parts. In the first part, the researchers recorded the brain waves of 3-to-5-year-olds as they watched cartoons of one character either helping or hurting another.
The children’s brains reacted differently to the good and bad scenarios. But they did so in two different ways. One brain response, the EPN, was quick; another, the LPP, arose in more frontal parts of the brain and was slower. In adults, the EPN is related to automatic, instinctive reactions, while the LPP is connected to more purposeful, controlled and reflective thought.
In the second part of the study, the experimenters gave the children a pile of 10 stickers and told them they could keep them all themselves or could give some of them to an anonymous child who would visit the lab later in the day. Some children were more generous than others. Then the researchers checked to see which patterns of brain activity predicted the children’s generosity.
They found that the EPN—the quick, automatic, intuitive reaction—didn’t predict how generous the children were later on. But the slow, thoughtful LPP brain wave did. Children who showed more of the thoughtful brain activity when they saw the morally relevant cartoons also were more likely to share later on.
Of course, brain patterns are complicated and hard to interpret. But this study at least suggests an interesting possibility. There are indeed quick and automatic responses to help and to harm, and those responses may play a role in our moral emotions. But more reflective, complex and thoughtful responses may play an even more important role in our actions, especially actions like deciding to share with a stranger.
Perhaps this perspective can help to resolve some of the Christmas-time contradictions, too. We might wish that the Christmas spirit would descend on us and our children as simply and swiftly as the falling snow. But perhaps it’s the very complexity of the season, that very human tangle of wanting and giving, joy and elegy, warmth and tension, that makes Christmas so powerful, and that leads even children to reflection, however gently. Scrooge tells us about both greed and generosity, Santa’s lists reflect both justice and mercy, the Magi and the manger represent both abundance and poverty.
And, somehow, at least in memory, Christmas generosity always outweighs the greed, the joys outlive the disappointments. Even an unbeliever like me who still deeply loves Christmas can join in the spirit of Scrooge’s nephew Fred, “Though it has never put a scrap of gold or silver in my pocket [or, I would add, an entirely uncomplicated intuition of happiness in my brain], I believe that Christmas has done me good, and will do me good, and, I say, God bless it!”
The eyes are windows to the soul. What could be more obvious? I look through my eyes onto the world, and I look through the eyes of others into their minds.
We immediately see the tenderness and passion in a loving gaze, the fear and malice in a hostile glance. In a lecture room, with hundreds of students, I can pick out exactly who is, and isn’t, paying attention. And, of course, there is the electricity of meeting a stranger’s glance across a crowded room.
But wait a minute, eyes aren’t windows at all. They’re inch-long white and black and colored balls of jelly set in holes at the top of a skull. How could those glistening little marbles possibly tell me about love or fear or attention?
A new study in the Proceedings of the National Academy of Sciences by Sarah Jessen of the Max Planck Institute and Tobias Grossmann of the University of Virginia suggests that our understanding of eyes runs very deep and emerges very early.
Human eyes have much larger white areas than the eyes of other animals and so are easier to track. When most people, including tiny babies, look at a face, they concentrate on the eyes. People with autism, who have trouble understanding other minds, often don’t pay attention to eyes in the same way, and they have trouble meeting or following another person’s gaze. All this suggests that we may be especially adapted to figure out what our fellow humans see and feel from their eyes.
If that’s true, even very young babies might detect emotions from eyes, and especially eye whites. The researchers showed 7-month-old babies schematic pictures of eyes. The eyes could be fearful or neutral; the clue to the emotion was the relative position of the eye-whites. (Look in the mirror and raise your eyelids until the white area on top of the iris is visible—then register the look of startled fear on your doppelgänger in the reflection.)
The fearful eyes could look directly at the baby or look off to one side. As a comparison, the researchers also gave the babies exactly the same images to look at but with the colors reversed, so that the whites were black.
They showed the babies the images for only 50 milliseconds, too briefly even to see them consciously. They used a technique called Event-Related Brain Potentials, or ERP, to analyze the babies’ brain-waves.
The babies’ brain-waves were different when they looked at the fearful eyes and the neutral ones, and when they saw the eyes look right at them or off to one side. The differences were particularly clear in the frontal parts of the brain. Those brain areas control attention and are connected to the brain areas that detect fear.
When the researchers showed the babies the reversed images, their brains didn’t differentiate between them. So they weren’t just responding to the visual complexity of the images—they seemed to recognize that there was something special about the eye-whites.
So perhaps the eyes are windows to the soul. After all, I think that I just look out and directly see the table in front of me. But, in fact, my brain is making incredibly complex calculations that accurately reconstruct the shape of the table from the patterns of light that enter my eyeballs. My baby granddaughter Georgiana’s brain, nestled in the downy head on my lap, does the same thing.
The new research suggests that my brain also makes my eyes move in subtle ways that send out complex signals about what I feel and see. And, as she gazes up at my face, Georgie’s brain interprets those signals and reconstructs the feelings that caused them. She really does see the soul behind my eyes, as clearly as she sees the table in front of them.
Laurence Steinberg calls his authoritative new book on the teenage mind “Age of Opportunity.” Most parents think of adolescence, instead, as an age of crisis. In fact, the same distinctive teenage traits can lead to either triumph or disaster.
On the crisis side, Dr. Steinberg outlines the grim statistics. Even though teenagers are close to the peak of strength and health, they are more likely to die in accidents, suicides and homicides than younger or older people. And teenagers are dangerous to others as well. Study after study shows that criminal and antisocial behavior rises precipitously in adolescence and then falls again.
Why? What happens to transform a sane, sober, balanced 8-year-old into a whirlwind of destruction in just a few years? And why do even smart, thoughtful, good children get into trouble?
It isn’t because teenagers are dumb or ignorant. Studies show that they understand risks and predict the future as well as adults do. Dr. Steinberg wryly describes a public service campaign that tried to deter unprotected sex by explaining that children born to teenage parents are less likely to go to college. The risk to a potential child’s educational future is not very likely to slow down two teenagers making out on the couch.
Nor is it just that teenagers are impulsive; the capacity for self-control develops steadily through the teen years, and adolescents are actually better at self-control than younger children. So why are they so much more likely to act destructively?
Dr. Steinberg and other researchers suggest that the crucial change involves sensation-seeking. Teenagers are much more likely than either children or adults to seek out new experiences, rewards and excitements, especially social experiences.
Some recent studies by Kathryn Harden at the University of Texas at Austin and her colleagues in the journal Developmental Science support this idea. They analyzed a very large study that asked thousands of adolescents the same questions over the years, as they grew up. Some questions measured impulsiveness (“I have to use a lot of self-control to stay out of trouble”), some sensation-seeking (“I enjoy new and exciting experiences even if they are a little frightening or unusual . . .”) and some delinquency (“I took something from a store without paying for it”).
Impulsivity and sensation-seeking were not closely related to one another. Self-control steadily increased from childhood to adulthood, while sensation-seeking went up sharply and then began to decline. It was the speed and scope of the increase in sensation-seeking that predicted whether the teenagers would break the rules later on.
But while teenage sensation-seeking can lead to trouble, it can also lead to some of the most important advances in human culture. Dr. Steinberg argues that adolescence is a time when the human brain becomes especially “plastic,” particularly good at learning, especially about the social world. Adolescence is a crucial period for human innovation and exploration.
Sensation-seeking helped teenagers explore and conquer the literal jungles in our evolutionary past—and it could help them explore and conquer the metaphorical Internet jungles in our technological future. It can lead young people to explore not only new hairstyles and vocabulary, but also new kinds of politics, art, music and philosophy.
So how can worried parents ensure that their children’s explorations come out well rather than badly? A very recent study by Dr. Harden’s group provides a bit of solace. The relationship between sensation-seeking and delinquency was moderated by two other factors: the teenager’s friends and the parents’ knowledge of the teen’s activities. When parents kept track of where their children were and whom they were with, sensation-seeking was much less likely to be destructive. Asking the old question, “Do you know where your children are?” may be the most important way to make sure that adolescent opportunities outweigh the crises.
From the inside, nothing in the world feels more powerful than our impulse to care for helpless children. But new research shows that caring for children may actually be even more powerful than it feels. It may not just influence children's lives—it may even shape their genes.
As you might expect, the genomic revolution has completely transformed the nature/nurture debate. What you might not expect is that it has shown that nurture is even more important than we thought. Our experiences, especially our early experiences, don't just interact with our genes, they actually make our genes work differently.
This might seem like heresy. After all, one of the first things we learn in Biology 101 is that the genes we carry are determined the instant we are conceived. And that's true.
But genes are important because they make cells, and the process that goes from gene to cell is remarkably complex. The genes in a cell can be expressed differently—they can be turned on or off, for example—and that makes the cells behave in completely different ways. That's how the same DNA can create neurons in your brain and bone cells in your femur. The exciting new field of epigenetics studies this process.
One of the most important recent discoveries in biology is that this process of translating genes into cells can be profoundly influenced by the environment.
In a groundbreaking 2004 Nature Neuroscience paper, Michael Meaney at McGill University and his colleagues looked at a gene in rats that helps regulate how an animal reacts to stress. A gene can be "methylated" or "demethylated"—a certain molecule does or doesn't attach to the gene. This changes the way that the gene influences the cell.
In carefully controlled experiments Dr. Meaney discovered that early caregiving influenced how much the stress-regulating gene was methylated. Rats who got less nuzzling and licking from their mothers had more methylated genes. In turn, the rats with the methylated gene were more likely to react badly to stress later on. And these rats, in turn, were less likely to care for their own young, passing on the effect to the next generation.
The scientists could carefully control every aspect of the rats' genes and environment. But could you show the same effect in human children, with their far more complicated brains and lives? A new study by Seth Pollak and colleagues at the University of Wisconsin at Madison in the journal Child Development does just that. They looked at adolescents from vulnerable backgrounds, and compared the genes of children who had been abused and neglected to those who had not.
Sure enough, they found the same pattern of methylation in the human gene that is analogous to the rat stress-regulating gene. Maltreated children had more methylation than children who had been cared for. Earlier studies show that abused and neglected children are more sensitive to stress as adults, and so are more likely to develop problems like anxiety and depression, but we might not have suspected that the trouble went all the way down to their genes.
The researchers also found a familiar relationship between the socio-economic status of the families and the likelihood of abuse and neglect: Poverty, stress and isolation lead to maltreatment.
The new studies suggest a vicious multigenerational circle that affects a horrifyingly large number of children, making them more vulnerable to stress when they grow up and become parents themselves.
Twenty percent of American children grow up in poverty, and this number has been rising, not falling. Nearly a million are maltreated. The new studies show that this damages children, and perhaps even their children's children, at the most fundamental biological level.
From Ferguson to Gaza, this has been a summer of outrage. But just how outraged people are often seems to depend on which group they belong to. Polls show that far more African-Americans than white Americans think that Michael Brown's shooting by a Ferguson police officer was unjust. How indignant you are about Hamas rockets or Israeli attacks that kill civilians often depends on whether you identify with the Israelis or the Palestinians. This is true even when people agree about the actual facts.
You might think that such views are a matter of history and context, and that is surely partly true. But a new study in the Proceedings of the National Academy of Sciences suggests that they may reflect a deeper fact about human nature. Even young children are more indignant about injustice when it comes from "them" and is directed at "us." And that is true even when "them" and "us" are defined by nothing more than the color of your hat.
Jillian Jordan, Kathleen McAuliffe and Felix Warneken at Harvard University looked at what economists and evolutionary biologists dryly call "costly third-party norm-violation punishment" and the rest of us call "righteous outrage." We take it for granted that someone who sees another person act unfairly will try to punish the bad guy, even at some cost to themselves.
From a purely economic point of view, this is puzzling—after all, the outraged person is doing fine themselves. But enforcing fairness helps ensure social cooperation, and we humans are the most cooperative of primates. So does outrage develop naturally, or does it have to be taught?
The experimenters gave some 6-year-old children a pile of Skittles candy. Then they told them that earlier on, another pair of children had played a Skittle-sharing game. For example, Johnny got six Skittles, and he could choose how many to give to Henry and how many to keep. Johnny had either divided the candies fairly or kept them all for himself.
Now the children could choose between two options. If they pushed a lever to the green side, Johnny and Henry would keep their Skittles, and so would the child. If they pushed it to the red side, all six Skittles would be thrown away, and the children would lose a Skittle themselves as well. Johnny would be punished, but they would lose too.
When Johnny was fair, the children pushed the lever to green. But when Johnny was selfish, the children acted as if they were outraged. They were much more likely to push the lever to red—even though that meant they would lose themselves.
How would being part of a group influence these judgments? The experimenters let the children choose a team. The blue team wore blue hats, and the yellow team wore yellow. They also told the children whether Johnny and Henry each belonged to their team or the other one.
The teams were totally arbitrary: There was no poisonous past, no history of conflict. Nevertheless, the children proved more likely to punish Johnny's unfairness if he came from the other team. They were also more likely to punish him if Henry, the victim, came from their own team.
As soon as they showed that they were outraged at all, the children were more outraged by "them" than "us." This is a grim result, but it fits with other research. Children have impulses toward compassion and justice—the twin pillars of morality—much earlier than we would have thought. But from very early on, they tend to reserve compassion and justice for their own group.
There was a ray of hope, though. Eight-year-olds turned out to be biased toward their own team, but less so than the younger children. They seemed already to have widened their circle of moral concern beyond people who wear the same hats. We can only hope that, eventually, the grown-up circle will expand to include us all.
In a shifty world, surely the one thing we can rely on is the evidence of our own eyes. I may doubt everything else, but I have no doubts about what I see right now. Even if I'm stuck in The Matrix, even if the things I see aren't real—I still know that I see them.
Or do I?
A new paper in the journal Trends in Cognitive Sciences by the New York University philosopher Ned Block demonstrates just how hard it is to tell if we really know what we see. Right now it looks to me as if I see the entire garden in front of me, each of the potted succulents, all of the mossy bricks, every one of the fuchsia blossoms. But I can only pay attention to and remember a few things at a time. If I just saw the garden for an instant, I'd only remember the few plants I was paying attention to just then.
How about all the things I'm not paying attention to? Do I actually see them, too? It may just feel as if I see the whole garden because I quickly shift my attention from the blossoms to the bricks and back.
Every time I attend to a particular plant, I see it clearly. That might make me think that I was seeing it clearly all along, like somebody who thinks the refrigerator light is always on, because it always turns on when you open the door to look. This "refrigerator light" illusion might make me think I see more than I actually do.
On the other hand, maybe I do see everything in the garden—it's just that I can't remember and report everything I see, only the things I pay attention to. But how can I tell if I saw something if I can't remember it?
Prof. Block focuses on a classic experiment originally done in 1960 by George Sperling, a cognitive psychologist at the University of California, Irvine. (You can try the experiment yourself online.) Say you see a three-by-three grid of nine letters flash up for a split second. What letters were they? You will only be able to report a few of them.
Now suppose the experimenter tells you that if you hear a high-pitched noise you should focus on the first row, and if you hear a low-pitched noise you should focus on the last row. This time, not surprisingly, you will accurately report all three letters in the cued row, though you can't report the letters in the other rows.
But here's the trick. Now you only hear the noise after the grid has disappeared. You will still be very good at remembering the letters in the cued row. But think about it—you didn't know beforehand which row you should focus on. So you must have actually seen all the letters in all the rows, even though you could only access and report a few of them at a time. It seems as if we do see more than we can say.
Or do we? Here's another possibility. We know that people can extract some information from images they can't actually see—in subliminal perception, for example. Perhaps you processed the letters unconsciously, but you didn't actually see them until you heard the cue. Or perhaps you just saw blurred fragments of the letters.
Prof. Block describes many complex and subtle further experiments designed to distinguish these options, and he concludes that we do see more than we remember.
But however the debate gets resolved, the real moral is the same. We don't actually know what we see at all! You can do the Sperling experiment hundreds of times and still not be sure whether you saw the letters. Philosophers sometimes argue that our conscious experience can't be doubted because it feels so immediate and certain. But scientists tell us that feeling is an illusion, too.
Augie, my 2-year-old grandson, is working on his soufflés. This began by accident. Grandmom was trying to simultaneously look after a toddler and make dessert. But his delight in soufflé-making was so palpable that it has become a regular event.
The bar, and the soufflé, rise higher on each visit—each time he does a bit more and I do a bit less. He graduated from pushing the Cuisinart button and weighing the chocolate, to actually cracking and separating the eggs. Last week, he gravely demonstrated how you fold in egg whites to his clueless grandfather. (There is some cultural inspiration from Augie's favorite Pixar hero, Remy the rodent chef in "Ratatouille," though this leads to rather disturbing discussions about rats in the kitchen.)
It's startling to see just how enthusiastically and easily a 2-year-old can learn such a complex skill. And it's striking how different this kind of learning is from the kind children usually do in school.
New studies in the journal Human Development by Barbara Rogoff at the University of California, Santa Cruz, and colleagues suggest that this kind of learning may actually be more fundamental than academic learning, and it may also influence how helpful children are later on.
Dr. Rogoff looked at children in indigenous Mayan communities in Latin America. She found that even toddlers do something she calls "learning by observing and pitching in." Like Augie with the soufflés, these children master useful, difficult skills, from making tortillas to using a machete, by watching the grown-ups around them intently and imitating the simpler parts of the process. Grown-ups gradually encourage them to do more—the pitching-in part. The product of this collaborative learning is a genuine contribution to the family and community: a delicious meal instead of a standardized test score.
This kind of learning has some long-term consequences, Dr. Rogoff suggests. She and her colleagues also looked at children growing up in Mexico City who either came from an indigenous heritage, where this kind of observational learning is ubiquitous, or a more Europeanized tradition. When they were 8, the children from the indigenous traditions were much more helpful than the Europeanized children: They did more work around the house, more spontaneously, including caring for younger siblings. And children from an indigenous heritage had a fundamentally different attitude toward helping. They didn't need to be asked to help—instead they were proud of their ability to contribute.
The Europeanized children and parents were more likely to negotiate over helping. Parents tried all kinds of different contracts and bargains, and different regimes of rewards and punishments. Mostly, as readers will recognize with a sigh, these had little effect. For these children, household chores were something that a grown-up made you do, not something you spontaneously contributed to the family.
Dr. Rogoff argues that there is a connection between such early learning by pitching in and the motivation and ability of school-age children to help. In the indigenous-tradition families, the toddler's enthusiastic imitation eventually morphed into real help. In the more Europeanized families, the toddler's abilities were discounted rather than encouraged.
The same kind of discounting happens in my middle-class American world. After all, when I make the soufflé without Augie's help there's a much speedier result and a lot less chocolate fresco on the walls. And it's true enough that in our culture, in the long run, learning to make a good soufflé or to help around the house, or to take care of a baby, may be less important to your success as an adult than more academic abilities.
But by observing and pitching in, Augie may be learning something even more fundamental than how to turn eggs and chocolate into soufflé. He may be learning how to turn into a responsible grown-up himself.
Could what we eat shape how we think? A new paper in the journal Science by Thomas Talhelm at the University of Virginia and colleagues suggests that agriculture may shape psychology. A bread culture may think differently than a rice-bowl society.
Psychologists have long known that different cultures tend to think differently. In China and Japan, people think more communally, in terms of relationships. By contrast, people are more individualistic in what psychologist Joseph Henrich, in commenting on the new paper, calls "WEIRD cultures."
WEIRD stands for Western, educated, industrialized, rich and democratic. Dr. Henrich's point is that cultures like these are actually a tiny minority of all human societies, both geographically and historically. But almost all psychologists study only these WEIRD folks.
The differences show up in surprisingly varied ways. Suppose I were to ask you to draw a graph of your social network, with you and your friends represented as circles attached by lines. Americans make their own circle a quarter-inch larger than their friends' circles. In Japan, people make their own circle a bit smaller than the others.
Or you can ask people how much they would reward the honesty of a friend or a stranger and how much they would punish their dishonesty. Easterners tend to say they would reward a friend more than a stranger and punish a friend less; Westerners treat friends and strangers more equally.
These differences show up even in tests that have nothing to do with social relationships. You can give people a "Which of these things belongs together?" problem, like the old "Sesame Street" song. Say you see a picture of a dog, a rabbit and a carrot. Westerners tend to say the dog and the rabbit go together because they're both animals—they're in the same category. Easterners are more likely to say that the rabbit and the carrot go together—because rabbits eat carrots.
None of these questions has a right answer, of course. So why have people in different parts of the world developed such different thinking styles?
You might think that modern, industrial cultures would naturally develop more individualism than agricultural ones. But another possibility is that the kind of agriculture matters. Rice farming, in particular, demands a great deal of coordinated labor. To manage a rice paddy, a whole village has to cooperate and coordinate irrigation systems. By contrast, a single family can grow wheat.
Dr. Talhelm and colleagues used an ingenious design to test these possibilities. They looked at rice-growing and wheat-growing regions within China. (The people in these areas had the same language, history and traditions; they just grew different crops.) Then they gave people the psychological tests I just described. The people in wheat-growing areas looked more like WEIRD Westerners, but the rice growers showed the more classically Eastern communal and relational patterns. Most of the people they tested didn't actually grow rice or wheat themselves, but the cultural traditions of rice or wheat seemed to influence their thinking.
This agricultural difference predicted the psychological differences better than modernization did. Even industrialized parts of China with a rice-growing history showed the more communal thinking pattern.
The researchers also looked at two measures of what people do outside the lab: divorces and patents for new inventions. Conflict-averse communal cultures tend to have fewer divorces than individualistic ones, but they also create fewer individual innovations. Once again, wheat-growing areas looked more "WEIRD" than rice-growing ones.
In fact, Dr. Henrich suggests that rice-growing may have led to the psychological differences, which in turn may have sparked modernization. Aliens from outer space looking at the Earth in the year 1000 would never have bet that barbarian Northern Europe would become industrialized before civilized Asia. And they would surely never have guessed that eating sandwiches instead of stir-fry might make the difference.
Why do I exist? This isn't a philosophical cri de coeur; it's an evolutionary conundrum. At 58, I'm well past menopause, and yet I'll soldier on, with luck, for many years more. The conundrum is more vivid when you realize that human beings (and killer whales) are the only species where females outlive their fertility. Our closest primate relatives—chimpanzees, for example—usually die before their 50s, when they are still fertile.
It turns out that my existence may actually be the key to human nature. This isn't a megalomaniacal boast but a new biological theory: the "grandmother hypothesis." Twenty years ago, the anthropologist Kristen Hawkes at the University of Utah went to study the Hadza, a forager group in Africa, thinking that she would uncover the origins of hunting. But then she noticed the many wiry old women who dug roots and cooked dinners and took care of babies (much like me, though my root-digging skills are restricted to dividing the irises). It turned out that these old women played as important a role in feeding the group as the strapping young hunters. What's more, they provided an absolutely crucial resource by taking care of their grandchildren.
Long-lived grandmothers aren't just a miracle of modern medicine, either. Human life expectancy is much longer than it used to be, but that's mostly because far fewer children die in infancy. Anthropologists have looked at life spans in hunter-gatherer and forager societies, which resemble the societies we evolved in: If you make it past childhood, you have a good chance of making it into your 60s or 70s.
There are many controversies about what happened in human evolution. But there's no debate that there were two dramatic changes in what biologists call our "life history": We live much longer than our primate relatives, and our babies depend on adults for much longer.
Young chimps gather as much food as they eat by the time they are 7 or so. But even in forager societies, human children pull their weight only when they are teenagers. Why would our babies be helpless for so long? That long immaturity helps make us so smart: It gives us a long protected time to grow large brains and to use those brains to learn about the world we live in. Human beings can learn to adapt to an exceptionally wide variety of environments, and those skills of learning and culture develop in the early years of life.
But that immaturity has a cost. It means that biological mothers can't keep babies going all by themselves: They need help. In forager societies grandmothers provide a substantial amount of child care as well as nutrition. Barry Hewlett at Washington State University and his colleagues found, much to their surprise, that grandmothers even shared breast-feeding with mothers. Some grandmoms just served as big pacifiers, but some, even after menopause, could "relactate," actually producing milk. (Though I think I'll stick to the high-tech, 21st-century version of helping to feed my 5-month-old granddaughter with electric pumps, freezers and bottles.)
Dr. Hawkes's "grandmother hypothesis" proposes that grandmotherhood developed in tandem with our long childhood. In fact, she argues that the evolution of grandmothers was exactly what allowed our long childhood, and the learning and culture that go with it, to emerge. In mathematical models, you can see what happens if, at first, just a few women live past menopause and use that time to support their grandchildren (who, of course, share their genes). The "grandmother trait" can rapidly take hold and spread. And the more grandmothers contribute, the longer the period of immaturity can be.
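Purely as an illustration of how such models work (every number here is invented, and this is a bare-bones sketch rather than Dr. Hawkes's actual mathematics), a few lines of code show how a trait that boosts grandchild survival can spread from rarity toward fixation:

```python
def simulate(generations=50, base_survival=0.30, boost=0.15, init_freq=0.05):
    """Toy model: track the frequency of a heritable 'grandmother trait'.

    Offspring in families with a helping grandmother survive at a
    boosted rate; all parameter values are made up for illustration.
    """
    freq = init_freq
    history = [freq]
    for _ in range(generations):
        helped = freq * (base_survival + boost)   # surviving carriers
        unhelped = (1 - freq) * base_survival     # surviving non-carriers
        freq = helped / (helped + unhelped)       # next generation's frequency
        history.append(freq)
    return history

hist = simulate()
# Starting at 5%, the trait climbs toward 100% within a few dozen generations.
```

Because carrier families out-survive non-carrier families every generation, even a modest boost compounds quickly, which is the qualitative point of the hypothesis.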
So on Mother's Day this Sunday, as we toast mothers over innumerable Bloody Marys and Eggs Benedicts across the country, we might add an additional toast for the gray-haired grandmoms behind the scenes.
We human beings spend hours each day telling and hearing stories. We always have. We’ve passed heroic legends around hunting fires, kitchen tables and the web, and told sad tales of lost love on sailing ships, barstools and cell phones. We’ve been captivated by Oedipus and Citizen Kane and Tony Soprano.
Why? Why not just communicate information through equations or lists of facts? Why is it that even when we tell the story of our own random, accidental lives we impose heroes and villains, crises and resolutions?
You might think that academic English and literature departments, departments that are devoted to stories, would have tried to answer this question or would at least want to hear from scientists who had. But, for a long time, literary theory was dominated by zombie ideas that had died in the sciences. Marx and Freud haunted English departments long after they had disappeared from economics and psychology.
Recently, though, that has started to change. Literary scholars are starting to pay attention to cognitive science and neuroscience. Admittedly, some of the first attempts were misguided and reductive – “evolutionary psychology” just-so stories or efforts to locate literature in a particular brain area. But the conversation between literature and science is becoming more and more sophisticated and interesting.
At a fascinating workshop at Stanford last month called “The Science of Stories,” scientists and scholars talked about why reading Harlequin romances may make you more empathetic, about how ten-year-olds create the fantastic fictional worlds called “paracosms” and about the subtle psychological inferences in the great Chinese novel “The Story of the Stone.”
One of the most interesting and surprising results came from the neuroscientist Uri Hasson at Princeton. As techniques for analyzing brain-imaging data have gotten more sophisticated, neuroscientists have gone beyond simply mapping particular brain regions to particular psychological functions. Instead, they use complex mathematical analyses to look for patterns in the activity of the whole brain as it changes over time. Hasson and his colleagues have gone beyond even that. They measure the relationship between the pattern in one person’s brain and the pattern in another’s.
They’ve been especially interested in how brains respond to stories, whether people are watching a Clint Eastwood movie, listening to a Salinger short story or just hearing someone’s personal “How We Met” drama. When different people watched the same vivid story as they lay in the scanner—“The Good, the Bad and the Ugly,” for instance—their brain activity unfolded in a remarkably similar way. Sergio Leone really knew how to get into your head.
In another experiment, they recorded the pattern of one person’s brain activity as she told a vivid personal story. Then someone else listened to a recording of the story, and they recorded his brain activity. Again, there was a remarkable degree of correlation between the two brain patterns. The storyteller, like Leone, had literally gotten into the listener’s brain and altered it in predictable ways. But more than that, she had made the listener’s brain match her own.
The more tightly coupled the brains became, the more the listener said that he understood the story. This coupling effect disappeared if you scrambled the sentences in the story. There was something about the literary coherence of the tale that seemed to do the work.
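Hasson’s team used far more sophisticated whole-brain analyses, but the core measure—correlating one person’s signal with another’s over time, and watching the correlation vanish when the story is scrambled—can be sketched in a few lines. All the “brain signals” below are simulated, not real data:

```python
import random

def pearson(x, y):
    # Plain Pearson correlation between two equal-length signals.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# A made-up "speaker" brain signal sampled at 200 time points.
speaker = [random.gauss(0, 1) for _ in range(200)]
# A listener who follows the story tracks that signal, plus noise.
listener = [s + random.gauss(0, 0.5) for s in speaker]
# Scrambling the story destroys the shared temporal structure.
scrambled = speaker[:]
random.shuffle(scrambled)
listener_scrambled = [s + random.gauss(0, 0.5) for s in scrambled]

coupled = pearson(speaker, listener)              # high: brains "in sync"
uncoupled = pearson(speaker, listener_scrambled)  # near zero
```

The intact story yields a strong correlation between the two signals; the scrambled version yields essentially none, mirroring the coherence effect described above.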
One of my own favorite fictions, Star Trek, often includes stories about high-tech telepathic mind control. Some alien has special powers that allow it to shape another person’s brain activity to match its own, or that produce brains so tightly linked that you can barely distinguish them. Hasson’s results suggest that we lowly humans are actually as good at mind-melding as the Vulcans or the Borg. We just do it with stories.
Are young children stunningly dumb or amazingly smart? We usually think that children are much worse at solving problems than we are. After all, they can’t make lunch or tie their shoes, let alone figure out long division or ace the SATs. But, on the other hand, every parent finds herself exclaiming “Where did THAT come from!” all day long.
So we also have a sneaking suspicion that children might be a lot smarter than they seem. A new study from our lab that just appeared in the journal Cognition shows that four-year-olds may actually solve some problems better than grown-ups do.
Chris Lucas, Tom Griffiths, Sophie Bridgers and I wanted to know how preschoolers learn about cause and effect. We used a machine that lights up when you put some combinations of blocks on it and not others. Your job is to figure out which blocks make it go. (Actually, we secretly activate the machine with a hidden pedal, but fortunately nobody ever guesses that.)
Try it yourself. Imagine that you, a clever grown-up, see me put a round block on the machine three times. Nothing happens. But when I put a square block on next to the round one the machine lights up. So the square one makes it go and the round one doesn’t, right?
Well, not necessarily. That’s true if individual blocks light up the machine. That’s the obvious idea and the one that grown-ups always think of first. But the machine could also work in a more unusual way. It could be that it takes a combination of two blocks to make the machine go, the way that my annoying microwave will only go if you press both the “cook” button and the “start” button. Maybe the square and round blocks both contribute, but they have to go on together.
Suppose I also show you that a triangular block does nothing and a rectangular one does nothing, but the machine lights up when you put them on together. That should tell you that the machine follows the unusual combination rule instead of the obvious individual block rule. Will that change how you think about the square and round blocks?
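In causal-learning terms, the triangle/rectangle demonstration pits two hypotheses against the evidence. A minimal enumeration (the block names and rule functions are my own illustration, not the study's code) shows why only the combination rule survives:

```python
from itertools import combinations

blocks = ["triangle", "rectangle"]
# Each trial: (blocks placed on the machine, did it light up?)
evidence = [
    (("triangle",), False),
    (("rectangle",), False),
    (("triangle", "rectangle"), True),
]

def fits_individual(activators):
    # Individual rule: the machine lights up if any activator block is on it.
    return all(any(b in activators for b in trial) == lit
               for trial, lit in evidence)

def fits_pair(pair):
    # Combination rule: it lights up only when the whole pair is on together.
    return all((set(pair) <= set(trial)) == lit for trial, lit in evidence)

# Try every possible set of individual activators, and every possible pair.
surviving_individual = [s for r in range(len(blocks) + 1)
                        for s in combinations(blocks, r) if fits_individual(s)]
surviving_pairs = [p for p in combinations(blocks, 2) if fits_pair(p)]
# No assignment of individual activators fits the evidence; the pair does.
```

Each block fails alone but the pair succeeds, so no version of the "individual block" rule can explain the data, while the combination rule fits exactly.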
We showed patterns like these to kids ages 4 and 5 as well as to Berkeley undergraduates. First we showed them the triangle/rectangle kind of pattern, which suggested that the machine might use the unusual combination rule. Then we showed them the ambiguous round/square kind of pattern.
The kids got it. They figured out that the machine might work in this unusual way and so that you should put both blocks on together. But the best and brightest students acted as if the machine would always follow the common and obvious rule, even when we showed them that it might work differently.
Does this go beyond blocks and machines? We think it might reflect a much more general difference between children and adults. Children might be especially good at thinking about unlikely possibilities. After all, grown-ups know a tremendous amount about how the world works. It makes sense that we mostly rely on what we already know.
In fact, computer scientists talk about two different kinds of learning and problem solving – “exploit” versus “explore.” In “exploit” learning we try to quickly find the solution that is most likely to work right now. In “explore” learning we try out lots of possibilities, including unlikely ones, even if they may not have much immediate pay-off. To thrive in a complicated world you need both kinds of learning.
A particularly effective strategy is to start off exploring and then narrow in to exploit. Childhood, especially our unusually long and helpless human childhood, may be evolution’s way of balancing exploration and exploitation. Grown-ups stick with the tried and true; 4-year-olds have the luxury of looking for the weird and wonderful.
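Computer scientists formalize this trade-off with "bandit" algorithms. One standard strategy, epsilon-greedy with a decaying exploration rate, captures exactly the explore-first, exploit-later pattern described above; the two options and their payoffs below are invented for illustration:

```python
import random

random.seed(1)
# Hypothetical two-armed bandit: an obvious option and a weird one
# that actually pays off more often. All numbers are made up.
true_payoff = {"obvious": 0.4, "weird": 0.7}

def pull(arm):
    return 1.0 if random.random() < true_payoff[arm] else 0.0

counts = {a: 0 for a in true_payoff}
values = {a: 0.0 for a in true_payoff}   # running payoff estimate per arm

for t in range(1, 1001):
    epsilon = t ** -0.5                  # explore a lot early, exploit later
    if random.random() < epsilon:
        arm = random.choice(list(true_payoff))   # explore: try anything
    else:
        arm = max(values, key=values.get)        # exploit: best so far
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

best = max(values, key=values.get)
```

Because the agent samples both arms early, it discovers that the "weird" option pays better and then settles on it, much as a child's early exploration can pay off later.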
How do a few pounds of gray goo in our skulls create our conscious experience—the blue of the sky, the tweet of the birds? Few questions are so profound and important—or so hard. We are still very far from an answer. But we are learning more about what scientists call "the neural correlates of consciousness," the brain states that accompany particular kinds of conscious experience.
Most of these studies look at the sort of conscious experiences that people have in standard fMRI brain-scan experiments or that academics like me have all day long: bored woolgathering and daydreaming punctuated by desperate bursts of focused thinking and problem-solving. We've learned quite a lot about the neural correlates of these kinds of consciousness.
But some surprising new studies have looked for the correlates of more exotic kinds of consciousness. Psychedelic drugs such as LSD were designed to be used in scientific research and, potentially at least, as therapy for mental illness. But of course, those drugs long ago escaped from the lab into the streets. They disappeared from science as a result. Recently, though, scientific research on hallucinogens has been making a comeback.
Robin Carhart-Harris at Imperial College London and his colleagues review their work on psychedelic neuroscience in a new paper in the journal Frontiers in Neuroscience. Like other neuroscientists, they put people in fMRI brain scanners. But these scientists gave psilocybin—the active ingredient in consciousness-altering "magic mushrooms"—to volunteers with experience with psychedelic drugs. Others got a placebo. The scientists measured both groups' brain activity.
Normally, when we introspect, daydream or reflect, a group of brain areas called the "default mode network" is particularly active. These areas also seem to be connected to our sense of self. Another brain-area group is active when we consciously pay attention or work through a problem. In both rumination and attention, parts of the frontal cortex are particularly involved, and there is a lot of communication and coordination between those areas and other parts of the brain.
Some philosophers and neuroscientists have argued that consciousness itself is the result of this kind of coordinated brain activity. They think consciousness is deeply connected to our sense of the self and our capacities for reflection and control, though we might have other fleeting or faint kinds of awareness.
But what about psychedelic consciousness? Far from faint or fleeting, psychedelic experiences are more intense, vivid and expansive than everyday ones. So you might expect to see that the usual neural correlates of consciousness would be especially active when you take psilocybin. That's just what the scientists predicted. But consistently, over many experiments, they found the opposite. On psilocybin, the default mode network and frontal control systems were actually much less active than normal, and there was much less coordination between different brain areas. In fact, "shroom" consciousness looked neurologically like the inverse of introspective, reflective, attentive consciousness.
The researchers also got people to report on the quality of their psychedelic experiences. The more intense the experiences were—and particularly, the more people reported that they had lost the sense of a boundary between themselves and the world—the more they showed the distinctive pattern of deactivation.
Dr. Carhart-Harris and colleagues suggest that the common theory linking consciousness and control is wrong. Instead, much of the brain activity accompanying workaday consciousness may be devoted to channeling, focusing and even shutting down experience and information, rather than creating them. The Carhart-Harris team points to other uncontrolled but vivid kinds of consciousness—dreams, mystical experiences, early stages of psychosis and perhaps even infant consciousness—as parallels to the hallucinogenic drug experience.
To paraphrase Hamlet, it turns out that there are more, and stranger, kinds of consciousness than are dreamt of in our philosophy.
Two new studies in the journal Cognition describe how some brilliant decision makers expertly use probability for profit.
But you won't meet these economic whizzes at the World Economic Forum in Switzerland this month. Unlike the "Davos men," these analysts require a constant supply of breasts, bottles, shiny toys and unconditional adoration (well, maybe not so unlike the Davos men). Although some of them make do with bananas. The quants in question are 10-month-old babies and assorted nonhuman primates.
Ordinary grown-ups are terrible at explicit probabilistic and statistical reasoning. For example, how likely is it that there will be a massive flood in America this year? How about an earthquake leading to a massive flood in California? People illogically give the first event a lower likelihood than the second. But even babies and apes turn out to have remarkable implicit statistical abilities.
Stephanie Denison at the University of Waterloo in Canada and Fei Xu at the University of California, Berkeley, showed babies two large transparent jars full of lollipop-shaped toys. Some of the toys had plain black tops while some were pink with stars, glitter and blinking lights. Of course, economic acumen doesn't necessarily imply good taste, and most of the babies preferred pink bling to basic black.
The two jars had different proportions of black and pink toys. For example, one jar contained 12 pink and four black toys. The other jar had 12 pink toys too but also contained 36 black toys. The experimenter took out a toy from one jar, apparently at random, holding it by the "pop" so that the babies couldn't see what color it was. Then she put it in an opaque cup on the floor. She took a toy from the second jar in the same way and put it in another opaque cup. The babies crawled toward one cup or the other and got the toy. (Half the time she put the first cup in front of the first jar, half the time she switched them around.)
What should you do in this situation if you really want pink lollipops? The first cup is more likely to have a pink pop inside than the second—the odds are 3 to 1 versus 1 to 3—even though both jars have exactly the same number of pink toys inside. It isn't a sure thing, but that is where you would place your bets.
So did the babies. They consistently crawled to the cup that was more likely to have a pink payoff. In a second experiment, one jar had 16 pink and 4 black toys, while the other had 24 pink and 96 black ones. The second jar actually held more pink toys than the first one, but the cup was less likely to hold a pink toy. The babies still went for the rational choice.
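The arithmetic the babies seem to track implicitly is easy to make explicit. A quick check of both experiments' jars, using the numbers described above:

```python
from fractions import Fraction

def p_pink(pink, black):
    # Chance that a toy drawn at random from a jar is pink.
    return Fraction(pink, pink + black)

# First experiment: both jars hold the same NUMBER of pink toys,
# but in different proportions.
jar_a = p_pink(12, 4)    # 12 pink, 4 black  -> 3/4 chance of pink
jar_b = p_pink(12, 36)   # 12 pink, 36 black -> 1/4 chance of pink

# Second experiment: the second jar holds MORE pink toys outright,
# yet a random draw from it is still less likely to be pink.
jar_c = p_pink(16, 4)    # 16 pink, 4 black  -> 4/5
jar_d = p_pink(24, 96)   # 24 pink, 96 black -> 1/5
```

What matters for the babies' choice is the proportion in each jar, not the absolute count, which is exactly the distinction the second experiment tests.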
In the second study, Hannes Rakoczy at the University of Göttingen in Germany and his colleagues did a similar experiment with a group of gorillas, bonobos, chimps and orangutans. They used banana and carrot pieces, and the experimenter hid the food in one hand or the other, not in a cup. But the scientists got the same results: The apes chose the hand that was more likely to hold a banana.
So it seems that we're designed with a basic understanding of probability. The puzzle is this: Why are grown-ups often so stupid about probabilities when even babies and chimps can be so smart?
This intuitive, unconscious statistical ability may be completely separate from our conscious reasoning. But other studies suggest that babies' unconscious understanding of numbers may actually underpin their ability to explicitly learn math later. We don't usually even try to teach probability until high school. Maybe we could exploit these intuitive abilities to teach children, and adults, to understand probability better and to make better decisions as a result.
The Gopnik lab is rejoicing. My student Caren Walker and I have just published a paper in the well-known journal Psychological Science. Usually when I write about scientific papers here, they sound neat and tidy. But since this was our own experiment, I can tell you the messy inside story too.
First, the study—and a small IQ test for you. Suppose you see an experimenter put two orange blocks on a machine, and it lights up. She then puts a green one and a blue one on the same machine, but nothing happens. Two red ones work, a black and white combination doesn't. Now you have to make the machine light up yourself. You can choose two purple blocks or a yellow one and a brown one.
But this simple problem actually requires some very abstract thinking. It's not that any particular block makes the machine go. It's the fact that the blocks are the same rather than different. Other animals have a very hard time understanding this. Chimpanzees can get hundreds of examples and still not get it, even with delicious bananas as a reward. As a clever (or even not so clever) reader of this newspaper, you'd surely choose the two purple blocks.
The conventional wisdom has been that young children also can't learn this kind of abstract logical principle. Scientists like Jean Piaget believed that young children's thinking was concrete and superficial. And in earlier studies, preschoolers couldn't solve this sort of "same/different" problem.
But in those studies, researchers asked children to say what they thought about pictures of objects. Children often look much smarter when you watch what they do instead of relying on what they say.
We did the experiment I just described with 18-to-24-month-olds. And they got it right, with just two examples. The secret was showing them real blocks on a real machine and asking them to use the blocks to make the machine go.
Tiny toddlers, barely walking and talking, could quickly learn abstract relationships. And they understood "different" as well as "same." If you reversed the examples so that the two different blocks made the machine go, they would choose the new, "different" pair.
The brilliant scientists of the Gopnik lab must have realized that babies could do better than prior research suggested and so designed this elegant experiment, right? Not exactly. Here's what really happened: We were doing a totally different experiment.
My student Caren wanted to see whether getting children to explain an event made them think about it more abstractly. We thought that a version of the "same block" problem would be tough for 4-year-olds and having them explain might help. We actually tried a problem a bit simpler than the one I just described, because the experimenter put the blocks on the machine one at a time instead of simultaneously. The trouble was that the 4-year-olds had no trouble at all! Caren tested 3-year-olds, then 2-year-olds and finally the babies, and they got it too.
We sent the paper to the journal. All scientists occasionally (OK, more than occasionally) curse journal editors and reviewers, but they contributed to the discovery too. They insisted that we do the more difficult simultaneous version of the task with babies and that we test "different" as well as "same." So we went back to the lab, muttering that the "different" task would be too hard. But we were wrong again.
Now we are looking at another weird result. Although the 4-year-olds did well on the easier sequential task, in a study we're still working on, they actually seem to be doing worse than the babies on the harder simultaneous one. So there's a new problem for us to solve.
Scientists legitimately worry about confirmation bias, our tendency to look for evidence that fits what we already think. But, fortunately, learning is most fun, for us and 18-month-olds too, when the answers are most surprising.
Scientific discoveries aren't about individual geniuses miraculously grasping the truth. Instead, they come when we all chase the unexpected together.
Over the past decade, popular science has been suffering from neuromania. The enthusiasm came from studies showing that particular areas of the brain “light up” when you have certain thoughts and experiences. It’s mystifying why so many people thought this explained the mind. What have you learned when you say that someone’s visual areas light up when they see things?
People still seem to be astonished at the very idea that the brain is responsible for the mind—a bunch of grey goo makes us see! It is astonishing. But scientists knew that a century ago; the really interesting question now is how the grey goo lets us see, think and act intelligently. New techniques are letting scientists understand the brain as a complex, dynamic, computational system, not just a collection of individual bits of meat associated with individual experiences. These new studies come much closer to answering the “how” question.
Take a study in the journal Nature this year by Stefano Fusi of Columbia University College of Physicians and Surgeons, Earl K. Miller of the Massachusetts Institute of Technology and their colleagues. Fifty years ago, David Hubel and Torsten Wiesel made a great Nobel Prize-winning discovery. They recorded the signals from particular neurons in cats’ brains as the animals looked at different patterns. The neurons responded selectively to some images rather than others. One neuron might respond only to lines that slanted right, another only to those slanting left.
But many neurons don’t respond in this neatly selective way. This is especially true of neurons in the parts of the brain associated with complex cognition and problem-solving, like the prefrontal cortex. Instead, these cells are a mysterious mess—they respond idiosyncratically to different complex collections of features. What are these neurons doing?
In the new study, the researchers taught monkeys to remember and respond to one shape rather than another while recording their brain activity. But instead of looking at just one neuron at a time, they recorded the activity of many prefrontal neurons at once. A number of them showed weird, messy “mixed selectivity” patterns: One neuron might respond when the monkey remembered just one shape, or only when it recognized the shape but not when it recalled it, while a neighboring cell showed a different pattern entirely.
In order to analyze how the whole group of cells worked the researchers turned to the techniques of computer scientists who are trying to design machines that can learn. Computers aren’t made of carbon, of course, let alone neurons. But they have to solve some of the same problems, like identifying and remembering patterns. The techniques that work best for computers turn out to be remarkably similar to the techniques that brains use.
Essentially, the researchers found the brain was using the same general sort of technique that Google uses for its search algorithm. You might think that the best way to rank search results would be to pick out a few features of each Web page, like “relevance” or “trustworthiness”—in the same way as the neurons picked out whether an edge slanted right or left. Instead, Google does much better by combining the many messy, idiosyncratic linking decisions of individual users.
With neurons that detect just a few features, you can capture those features and combinations of features, but not much more. To capture more complex patterns, the brain does better by amalgamating and integrating information from many different neurons with very different response patterns. The brain crowd-sources.
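As a loose analogy only (this is not the paper's actual analysis, and the "neurons" here are simulated numbers), pooling many noisy, idiosyncratic readings recovers a quantity far better than a typical single reading does:

```python
import random

random.seed(2)
true_value = 1.0  # the quantity the population of cells jointly encodes

# Each simulated "neuron" reports the value plus its own idiosyncratic noise.
readings = [true_value + random.gauss(0, 1.0) for _ in range(500)]

# A single neuron's reading is off by a large amount on average...
mean_single_error = sum(abs(r - true_value) for r in readings) / len(readings)
# ...but the pooled (averaged) estimate is far closer to the truth.
pooled = sum(readings) / len(readings)
pooled_error = abs(pooled - true_value)
```

The noise in individual readings largely cancels in the average, which is one intuition behind calling the brain's strategy "crowd-sourcing."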
Scientists have long argued that the mind is more like a general software program than like a particular hardware set-up. The new combination of neuroscience and computer science doesn’t just tell us that the grey goo lets us think, or even exactly where that grey goo is. Instead, it tells us what programs it runs.
Imagine a scientist peeking into the skulls of glow-in-the-dark, cocaine-loving mice and watching their nerve cells send out feelers. It may sound more like something from cyberpunk writer William Gibson than from the journal Nature Neuroscience. But this kind of startling experiment promises to change how we think about the brain and mind.
Scientific progress often involves new methods as much as new ideas. The great methodological advance of the past few decades was functional magnetic resonance imaging (fMRI): It lets scientists see which areas of the brain are active when a person thinks something.
But scientific methods can also shape ideas, for good and ill. The success of fMRI led to a misleadingly static picture of how the brain works, particularly in the popular imagination. When the brain lights up to show the distress of a mother hearing her baby cry, it's tempting to say that motherly concern is innate.
But that doesn't follow at all. A learned source of distress can produce the same effect. Logic tells you that every time we learn something, our brains must change, too. In fact, that kind of change is the whole point of having a brain in the first place. The fMRI pictures of brain areas "lighting up" don't show those changes. But there are remarkable new methods that do, at least for mice.
Slightly changing an animal's genes can make it produce fluorescent proteins. Scientists can use a similar technique to make mice with nerve cells that light up. Then they can see how the mouse neurons grow and connect through a transparent window in the mouse's skull.
The study that I cited from Nature Neuroscience, by Linda Wilbrecht and her colleagues, used this technique to trace one powerful and troubling kind of learning—learning to use drugs. Cocaine users quickly learn to associate their high with a particular setting, and when they find themselves there, the pull of the drug becomes particularly irresistible.
First, the researchers injected mice with either cocaine or (for the control group) salt water and watched what happened to the neurons in the prefrontal part of their brains, where decisions get made. The mice who got cocaine developed more "dendritic spines" than the other mice—their nerve cells sent out more potential connections that could support learning. So cocaine, just by itself, seems to make the brain more "plastic," more susceptible to learning.
But a second experiment was even more interesting. Mice, like humans, really like cocaine. The experimenters gave the mice cocaine on one side of the cage but not the other, and the mice learned to go to that side of the cage. The experimenters recorded how many new neural spines were formed and how many were still there five days later.
All the mice got the same dose of cocaine, but some of them showed a stronger preference for the cocaine side of the cage than others—they had learned the association between the cage and the drug better. The mice who learned better were much more likely to develop persistent new spines. The changes in behavior were correlated with changes in the brain.
It could be that some mice were more susceptible to the effects of the cocaine, which produced more spines, which made them learn better. Or it could be that the mice who were better learners developed more persistent spines.
We don't know how this drug-induced learning compares to more ordinary kinds of learning. But we do know, from similar studies, that young mice produce and maintain more new spines than older mice. So it may be that the quick, persistent learning that comes with cocaine, though destructive, is related to the profound and extensive learning we see early in life, in both mice and men.
How can you decide whether to have a child? It’s a complex and profound question—a philosophical question. But it’s not a question traditional philosophers thought about much. In fact, the index of the 1967 “Encyclopedia of Philosophy” had only four references to children at all—though there were hundreds of references to angels. You could read our deepest thinkers and conclude that humans reproduced through asexual cloning.
Recently, though, the distinguished philosopher L.A. Paul (who usually works on abstruse problems in the metaphysics of causation) wrote a fascinating paper, forthcoming in the journal Res Philosophica. Prof. Paul argues that there is no rational way to decide to have children—or not to have them.
How do we make a rational decision? The classic answer is that we imagine the outcomes of different courses of action. Then we consider both the value and the probability of each outcome. Finally, we choose the option with the highest “utilities,” as the economists say. Does the glow of a baby’s smile outweigh all those sleepless nights?
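As a toy illustration of that classic calculus (every outcome, probability and utility number here is invented for the sake of the example), a rational chooser would compute:

```python
# Toy expected-utility calculation for a two-option decision.
# All outcomes, probabilities, and utility values are invented for illustration.

def expected_utility(outcomes):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Each option maps to (probability, utility) pairs whose probabilities sum to 1.
options = {
    "have a child": [(0.7, 80), (0.3, -40)],   # the baby's smile vs. the sleepless nights
    "stay child-free": [(1.0, 30)],            # the familiar status quo
}

# The "rational" choice is simply the option with the highest expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
print({name: expected_utility(o) for name, o in options.items()})
print("choose:", best)
```

On these made-up numbers the calculus happily spits out an answer; Prof. Paul's point, developed below, is that the key inputs (what the outcomes would actually feel like) can't be known in advance.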
It’s not just economists. You can find the same picture in the advice columns of Vogue and Parenting. In the modern world, we assume that we can decide whether to have children based on what we think the experience of having a child will be like.
But Prof. Paul thinks there’s a catch. The trouble is that, notoriously, there is no way to really know what having a child is like until you actually have one. You might get hints from watching other people’s children. But that overwhelming feeling of love for this one particular baby just isn’t something you can understand beforehand. You may not even like other people’s children and yet discover that you love your own child more than anything. Of course, you also can’t really understand the crushing responsibility beforehand, either. So, Prof. Paul says, you just can’t make the decision rationally.
I think the problem may be even worse. Rational decision-making assumes there is a single person with the same values before and after the decision. If I’m trying to decide whether to buy peaches or pears, I can safely assume that if I prefer peaches now, the same “I” will prefer them after my purchase. But what if making the decision turns me into a different person with different values?
Part of what makes having a child such a morally transformative experience is the fact that my child’s well-being can genuinely be more important to me than my own. It may sound melodramatic to say that I would give my life for my children, but, of course, that’s exactly what every parent does all the time, in ways both large and small.
Once I commit myself to a child, I’m literally not the same person I was before. My ego has expanded to include another person even though—especially though—that person is utterly helpless and unable to reciprocate.
The person I am before I have children has to make a decision for the person I will be afterward. If I have kids, chances are that my future self will care more about them than just about anything else, even her own happiness, and she’ll be unable to imagine life without them. But, of course, if I don’t have kids, my future self will also be a different person, with different interests and values. Deciding whether to have children isn’t just a matter of deciding what you want. It means deciding who you’re going to be.
L.A. Paul, by the way, is, like me, both a philosopher and a mother—a combination that’s still surprisingly rare. There are more and more of us, though, so maybe the 2067 Encyclopedia of Philosophy will have more to say on the subject of children. Or maybe even philosopher-mothers will decide it’s easier to stick to thinking about angels.
Imagine that you are a radiologist searching through slides of lung tissue for abnormalities. On one slide, right next to a suspicious nodule, there is the image of a large, threatening gorilla. What would you do? Write to the American Medical Association? Check yourself into the schizophrenia clinic next door? Track down the practical joker among the lab technicians?
In fact, you probably wouldn’t do anything. That is because, although you were staring right at the gorilla, you probably wouldn’t have seen it. That startling fact shows just how little we understand about consciousness.
In the journal Psychological Science, Trafton Drew and colleagues report that they got radiologists to look for abnormalities in a series of slides, as they usually do. But then they added a gorilla to some of the slides. The gorilla gradually faded into the slides and then gradually faded out, since people are more likely to notice a sudden change than a gradual one. When the experimenters asked the radiologists if they had seen anything unusual, 83% said no. An eye-tracking machine showed that radiologists missed the gorilla even when they were looking straight at it.
This study is just the latest to demonstrate what psychologists call “inattentional blindness.” When we pay careful attention to one thing, we become literally blind to others—even startling ones like gorillas.
In one classic study, Dan Simons and Christopher Chabris showed people a video of students passing a ball around. They asked the viewers to count the number of passes, so they had to pay attention to the balls. In the midst of the video, someone in a gorilla suit walked through the players. Most of the viewers, who were focused on counting the balls, didn’t see the gorilla at all. You can experience similar illusions yourself at invisiblegorilla.com. It is an amazingly robust phenomenon—I am still completely deceived by each new example.
You might think this is just a weird thing that happens with videos in a psychology lab. But in the new study, the radiologists were seasoned professionals practicing a real and vitally important skill. Yet they were also blind to the unexpected events.
In fact, we are all subject to inattentional blindness all the time. That is one of the foundations of magic acts. Psychologists have started collaborating with professional magicians to figure out how their tricks work. It turns out that if you just keep your audience’s attention focused on the rabbit, they literally won’t even see what you’re doing with the hat.
Inattentional blindness is as important for philosophers as it is for radiologists and magicians. Many philosophers have claimed that we can’t be wrong about our conscious experiences. It certainly feels that way. But these studies are troubling. If you asked the radiologist about the gorilla, she’d say that she just experienced a normal slide in exactly the way she experienced the other slides—except that we know that can’t be true. Did she have the experience of seeing the gorilla and somehow not know it? Or did she experience just the part of the slide with the nodule and invent the gorilla-free remainder?
At this very moment, as I stare at my screen and concentrate on this column, I’m absolutely sure that I’m also experiencing the whole visual field—the chair, the light, the view out my window. But for all I know, invisible gorillas may be all around me.
Many philosophical arguments about consciousness are based on the apparently certain and obvious intuitions we have about our experience. This includes, of course, arguments that consciousness just couldn’t be explained scientifically. But scientific experiments like this one show that those beautifully clear and self-evident intuitions are really incoherent and baffling. We will have to wrestle with many other confusing, tricky, elusive gorillas before we understand how consciousness works.
To parents, there is no force known to science as powerful as the repulsion between children and vegetables.
Of course, just as supercooling fluids can suspend the law of electrical resistance, melting cheese can suspend the law of vegetable resistance. This is sometimes known as the Pizza Paradox. There is also the Edamame Exception, but this is generally considered to be due to the Snack Uncertainty Principle, by which a crunchy soybean is and is not a vegetable simultaneously. But when melty mozzarella conditions don’t apply, the law of vegetable repulsion would appear to be as immutable as gravity, magnetism or the equally mysterious law of child-godawful mess attraction.
In a new paper in Psychological Science, however, Sarah Gripshover and Ellen Markman of Stanford University have shown that scientists can overcome the child-vegetable repulsive principle. Remarkably, the scientists in question are the children themselves. It turns out that, by giving preschoolers a new theory of nutrition, you can get them to eat more vegetables.
My colleagues and I have argued that very young children construct intuitive theories of the world around them (my first book was called “The Scientist in the Crib”). These theories are coherent, causal representations of how things or people or animals work. Just like scientific theories, they let children make sense of the world, construct predictions and design intelligent actions.
Preschoolers already have some of the elements of an intuitive theory of biology. They understand that invisible germs can make you sick and that eating helps make you healthy, even if they don’t get all the details. One little boy explained about a peer, “He needs more to eat because he is growing long arms.”
The Stanford researchers got teachers to read 4- and 5-year-olds a series of story books for several weeks. The stories gave the children a more detailed but still accessible theory of nutrition. They explained that food is made up of different invisible parts, the equivalent of nutrients; that when you eat, your body breaks up the food into those parts; and that different kinds of food have different invisible parts. They also explained that your body needs different nutrients to do different things, so that to function well you need to take in a lot of different nutrients.
In a control condition, the teachers read children similar stories based on the current United States Department of Agriculture website for healthy nutrition. These stories also talked about healthy eating and encouraged it. But they didn’t provide any causal framework to explain how eating works or why you should eat better.
The researchers also asked children questions to test whether they had acquired a deeper understanding of nutrition. And at snack time they offered the children vegetables as well as fruit, cheese and crackers. The children who had heard the theoretical stories understood the concepts better. More strikingly, they also were more likely to pick the vegetables at snack time.
We don’t yet know if this change in eating habits will be robust or permanent, but a number of other recent studies suggest that changing children’s theories can actually change their behavior too.
A quick summary of 30 years of research in developmental psychology yields two big propositions: Children are much smarter than we thought, and adults are much stupider. Studies like this one suggest that the foundations of scientific thinking—causal inference, coherent explanation, and rational prediction—are not a creation of advanced culture but our evolutionary birthright.
Last week, I made a pilgrimage to Dove Cottage—a tiny white house nestled among the meres and fells of England's Lake District. William Wordsworth and his sister Dorothy lived there while they wrote two of my favorite books: his "Lyrical Ballads" and her journal—both masterpieces of Romanticism.
The Romantics celebrated the sublime—an altered, expanded, oceanic state of consciousness. Byron and Shelley looked for it in sex. Wordsworth's friends, Coleridge and De Quincey, tried drugs (De Quincey's opium scales sit next to Dorothy's teacups in Dove Cottage).
But Wordsworth identified this exalted state with the very different world of young children. His best poems describe the "splendor in the grass," the "glory in the flower," of early childhood experience. His great "Ode: Intimations of Immortality From Recollections of Early Childhood" begins: There was a time when meadow, grove, and stream, / The earth, and every common sight, / To me did seem / Apparell'd in celestial light, / The glory and the freshness of a dream.
This picture of the child's mind is remarkably close to the newest scientific picture. Children's minds and brains are designed to be especially open to experience. They're unencumbered by the executive planning, focused attention and prefrontal control that fuel the mad endeavor of adult life, the getting and spending that lays waste our powers (and, to be fair, lets us feed our children).
This makes children vividly conscious of "every common sight" that habit has made invisible to adults. It might be Wordsworth's meadows or the dandelions and garbage trucks that enchant my 1-year-old grandson.
It's often said that the Romantics invented childhood, as if children had merely been small adults before. But scientifically speaking, Wordsworth discovered childhood—he saw children more clearly than others had. Where did this insight come from? Mere recollection can't explain it. After all, generations of poets and philosophers had recollected early childhood and seen only confusion and limitation.
I suspect it came at least partly from his sister Dorothy. She was an exceptionally sensitive and intelligent observer, and the descriptions she recorded in her journal famously made their way into William's poems. He said that she gave him eyes and ears. Dorothy was also what the evolutionary anthropologist Sarah Hrdy calls an "allomother." All her life, she devotedly looked after other people's children and observed their development.
In fact, when William was starting to do his greatest work, he and Dorothy were looking after a toddler together. They rescued 4-year-old Basil Montagu from his irresponsible father, who paid them 50 pounds a year to care for him. The young Wordsworth earned more as a nanny than as a poet. Dorothy wrote about Basil—"I do not think there is any pleasure more delightful than that of marking the development of a child's faculties." It could be the credo of every developmental psychologist.
There's been much prurient speculation about whether Dorothy and William slept together. But very little has been written about the undoubted fact that they raised a child together.
For centuries the people who knew young children best were women. But, sexism aside, just bearing and rearing children was such overwhelming work that it left little time for thinking or writing about them, especially in a world without birth control, vaccinations or running water.
Dorothy was a thinker and writer who lived intimately with children but didn't bear the full, crushing responsibility of motherhood. Perhaps she helped William to understand children's minds so profoundly and describe them so eloquently.
Are human beings born good and corrupted by society or born bad and redeemed by civilization? Lately, goodness has been on a roll, scientifically speaking. It turns out that even 1-year-olds already sympathize with the distress of others and go out of their way to help them.
But the most recent work suggests that the origins of evil may be only a little later than the origins of good.
New studies show that even young children discriminate.
Our impulse to love and help the members of our own group is matched by an impulse to hate and fear the members of other groups. In "Gulliver's Travels," Swift described a vicious conflict between the Big-Endians, who broke their eggs at the big end, and the Little-Endians, who started from the little end. Historically, largely arbitrary group differences (Catholic vs. Protestant, Hutu vs. Tutsi) have led to persecution and even genocide.
When and why does this particular human evil arise? A raft of new studies shows that even 5-year-olds discriminate between what psychologists call in-groups and out-groups. Moreover, children actually seem to learn subtle aspects of discrimination in early childhood.
In a recent paper, Yarrow Dunham at Princeton and colleagues explored when children begin to have negative thoughts about other racial groups. White kids aged 3 to 12 and adults saw computer-generated, racially ambiguous faces. They had to say whether they thought the face was black or white. Half the faces looked angry, half happy. The adults were more likely to say that angry faces were black. Even people who would hotly deny any racial prejudice unconsciously associate other racial groups with anger.
But what about the innocent kids? Even 3- and 4-year-olds were more likely to say that angry faces were black. In fact, younger children were just as prejudiced as older children and adults.
Is this just something about white attitudes toward black people? The researchers did the same experiment with white and Asian faces. Although Asians aren't stereotypically angry, children also associated Asian faces with anger. Then the researchers tested Asian children in Taiwan with exactly the same white and Asian faces. The Asian children were more likely to think that angry faces were white. They also associated the out-group with anger, but for them the out-group was white.
Was this discrimination the result of some universal, innate tendency or were preschoolers subtly learning about discrimination? For black children, white people are the out-group. But, surprisingly, black children (and adults) were the only ones to show no bias at all; they categorized the white and black faces in the same way. The researchers suggest that this may be because black children pick up conflicting signals—they know that they belong to the black group, but they also know that the white group has higher status.
These findings show the deep roots of group conflict. But the last study also suggests that somehow children also quickly learn about how groups are related to each other.
Learning was also important in another way. The researchers began by asking the children to categorize unambiguously white, black or Asian faces. Children began to differentiate the racial groups at around age 4, but many of them still did not recognize the racial categories, and they made the white/Asian distinction at a later age than the black/white distinction. Only the children who recognized the racial categories showed bias, but those who did were just as biased as the adults. Still, it took kids from all races a while to learn those categories.
The studies of early altruism show that the natural state of man is not a war of all against all, as Thomas Hobbes said. But it may quickly become a war of us against them.
There's been a lot of talk about nature in the gay-marriage debate. Opponents point to the "natural" link between heterosexual sex and procreation. Supporters note nature's staggering diversity of sexual behavior and the ubiquity of homosexual sex in our close primate relatives. But, actually, gay marriage exemplifies a much more profound part of human nature: our capacity for cultural evolution.
The birds and the bees may be enough for the birds and the bees, but for us it's just the beginning.
Culture is our nature; the evolution of culture was one secret of our biological success. Evolutionary theorists like the philosopher Kim Sterelny, the biologist Kevin Laland and the psychologist Michael Tomasello emphasize our distinctively human ability to transmit new information and social practices from generation to generation. Other animals have more elements of culture than we once thought, but humans rely on cultural transmission far more than any other species.
Still, there's a tension built into cultural evolution. If the new generation just slavishly copies the previous one, the process of innovation will seize up. The advantage of the "cultural ratchet" is that we can use the discoveries of the previous generation as a jumping-off point for revisions and discoveries of our own.
Man may not be The Rational Animal, but we are The Empirical Animal—perpetually revising what we do in the light of our experience.
Studies show that children have a distinctively human tendency to precisely imitate what other people do. But they also can choose when to imitate exactly, when to modify what they've seen, and when to try something brand new.
Human adolescence, with its risk-taking and exploration, seems to be a particularly important locus of cultural innovation. Archaeologists think teenagers may have been the first cave-painters. We can even see this generational effect in other primates. Some macaque monkeys famously learned how to wash sweet potatoes and passed this skill to others. The innovator was the equivalent of a preteen girl, and other young macaques were the early adopters.
As in biological evolution, there is no guarantee that cultural evolution will always move forward, or that any particular cultural tradition or innovation will prove to be worth preserving. But although the arc of cultural evolution is long and irregular, overall it does seem to bend toward justice, or, at least, to human thriving.
Gay marriage demonstrates this dynamic of tradition and innovation in action. Marriage has itself evolved. It was once an institution that emphasized property and inheritance. It has become one that provides a way of both expressing and reinforcing values of commitment, loyalty and stability. When gay couples want marriage, rather than just civil unions, it's precisely because they endorse those values and want to be part of that tradition.
At the same time, as more and more people have courageously come out, there have been more and more gay relationships to experience. That experience has led most of the millennial generation to conclude that the link between marital tradition and exclusive heterosexuality is unnecessary, indeed wrong. The generational shift at the heart of cultural evolution is especially plain. Again and again, parents report that they're being educated by their children.
It's ironic that the objections to gay marriage center on child-rearing. Our long protected human childhood, and the nurturing and investment that goes with it, is, in fact, exactly what allows social learning and cultural evolution. Nurture, like culture, is also our nature. We nurture our children so that they can learn from our experience, but also so that subsequent generations can learn from theirs.
Marriage and family are institutions designed, at least in part, to help create an autonomous new generation, free to try to make better, more satisfying kinds of marriage and family for the generations that follow.
Babies and children sleep a lot—12 hours a day or so to our eight. But why would children spend half their lives in a state of blind, deaf paralysis punctuated by insane hallucinations? Why, in fact, do all higher animals surrender their hard-won survival abilities for part of each day?
Children themselves can be baffled and indignant about the way that sleep robs them of consciousness. We weary grown-ups may welcome a little oblivion, but at nap time, toddlers will rage and rage against the dying of the light.
Part of the answer is that sleep helps us to learn. It may just be too hard for a brain to take in the flood of new experiences and make sense of them at the same time. Instead, our brains look at the world for a while and then shut out new input and sort through what they have seen.
Children learn in a particularly profound way. Some remarkable experiments show that even tiny babies can take in a complex statistical pattern of data and figure out the rules and principles that explain the pattern. Sleep seems to play an especially important role in this kind of learning.
In 2006, Rebecca Gómez and her colleagues at the University of Arizona taught 15-month-old babies a made-up language. The babies listened to 240 "sentences" made of nonsense words, like "Pel hiftam jic" or "Pel lago jic." Like real sentences, these sentences followed rules. If "pel" was the first word, for instance, "jic" would always be the third one.
Half the babies heard the sentences just before they had a nap, and the other half heard them just after they woke up, and they then stayed awake.
Four hours later, the experimenters tested whether the babies had learned the "first and third" rule by seeing how long the babies listened to brand-new sentences. Some of the new sentences followed exactly the same rule as the sentences that the babies had heard earlier. Others followed a "first and third" rule but used different nonsense words.
Remarkably, the babies who had stayed awake had learned the specific rules behind the sentences they heard four hours before—like the rule about "pel" and "jic." Even more remarkably, the babies who had slept after the instruction seemed to learn the more abstract principle that the first and third words were important, no matter what those words actually were.
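The "first and third" rule can be sketched as a toy checker. (Only the "pel ... jic" pairing comes from the study as described; the second pairing and the checker itself are invented for illustration.)

```python
# Toy sketch of the nonadjacent-dependency rule in the Gomez experiment:
# the first word of a three-word sentence predicts the third word,
# while the middle word is free to vary.
# "pel" -> "jic" is from the study; "vot" -> "rud" is an invented stand-in.

PAIRS = {"pel": "jic", "vot": "rud"}

def follows_rule(sentence):
    """True if the third word is the one the first word predicts."""
    first, _middle, third = sentence.split()
    return PAIRS.get(first) == third

print(follows_rule("pel hiftam jic"))  # middle word varies, rule holds
print(follows_rule("pel lago jic"))    # rule holds
print(follows_rule("pel lago rud"))    # wrong third word for "pel"
```

The babies who slept behaved as if they had learned something more abstract still: that first and third positions matter at all, whatever the particular words.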
Just this month, a paper by Ines Wilhelm at the University of Tübingen and colleagues showed that older children also learn in their sleep—in fact, better than grown-ups. The researchers showed 8- to 11-year-olds and adults a grid of eight lights that lit up over and over in a particular sequence. Half the participants saw the lights before bedtime; half saw them in the morning. After 10 to 12 hours, the experimenters asked the participants to describe the sequence. The children and adults who had stayed awake got about half the transitions right, and the adults who had slept were only a little better. But the children who had slept were almost perfect—they learned substantially better than either group of adults.
There was another twist. While the participants slept, they wore an electronic cap to measure brain activity. The children had much more "slow-wave sleep" than the adults—that's an especially deep, dreamless kind of sleep. And both children and adults who had more slow-wave sleep learned better.
Children may sleep so much because they have so much to learn (though toddlers may find that scant consolation for the dreaded bedtime). It's paradoxical to try to get children to learn by making them wake up early to get to school and then stay up late to finish their homework.
Colin Powell reportedly said that on the eve of the Iraq war he was sleeping like a baby—he woke up every two hours screaming. But really sleeping like a baby might make us all smarter.