How Children Invented Humanity: The Role of Development in Human Evolution

David F. Bjorklund

Print publication date: 2020

Print ISBN-13: 9780190066864

Published to Oxford Scholarship Online: October 2020

DOI: 10.1093/oso/9780190066864.001.0001



7 Evolutionary Mismatches in the Development of Today’s Children
Oxford University Press

Abstract and Keywords

Differences between modern and ancient environments sometimes cause evolutionary mismatches. Many children are following an exceptionally slow life history strategy and as a result are safer and engage in less risky behavior than in the past (safetyism), although many are more psychologically fragile and less resilient. Excessive use of social media is associated with poorer physical and mental health, including increases in depression, anxiety, and loneliness. Today’s adolescents display hyper-individualism that emphasizes personal freedom and achievement. The relative lack of social bonding in individualistic societies is associated with increases in loneliness and mental health problems and can sometimes be exaggerated by social media use. Modern schools represent a mismatch with the environments of our forechildren. Similarly, young children’s exposure to digital media may have detrimental effects on subsequent learning and psychological development. Parents and educators can identify problems associated with evolutionary mismatches and design environments that make the lives of children happier.

Keywords: mismatch hypothesis, safetyism, social media, supernormal stimulus, hyper-individualism, discovery learning, guided play, biologically primary abilities, biologically secondary abilities, video deficit

Lenore Skenazy got her 15 minutes of fame (or perhaps infamy) in 2008, pilloried by the press and many parents as “America’s Worst Mom.” What did Skenazy do to warrant such scorn? Did she leave her children in a hot car while she played slot machines in a casino or lock them in a dark cellar as punishment for not doing their chores? No, her crime was that she permitted her 9-year-old son Izzy to ride the New York City subway by himself. Izzy was equipped with a MetroCard, a subway map, a 20-dollar bill, and some quarters to make a phone call if needed. The 45-minute trip was uneventful but joyful for Izzy. Then the journalist-mother wrote a column in The New York Sun about her son’s adventure and her reasons for permitting it,1 and the accusations of negligence and child abuse poured in.

Why the hubbub over what, to an earlier generation of New York City children, would be a mundane experience? Because in the decades prior to Izzy’s wild ride, Americans had become obsessed with their children’s safety. This was prompted, in part, by reports of kidnapped and murdered children, with 24-hour news stations informing people in Portland, Maine, what awful thing might have happened to a child in Portland, Oregon. Never mind that child kidnappings and murders by nonrelatives are statistically rare; they do happen, and when they do, no matter where in the world they occur, we hear about it. The message is: How would you feel if your child was abducted while walking home from school alone? Americans had embraced what First Amendment lawyer Greg Lukianoff and social psychologist Jonathan Haidt in their book The Coddling of the American Mind called safetyism—an excessive concern for the physical and emotional safety of children.2 In the years since Skenazy’s indictment as “America’s Worst Mom,” many people have come to her defense or realized that they must give their children more freedom. Skenazy herself wrote a book, runs a popular blog (both titled Free-Range Kids), and cofounded the group Let Grow: Future-Proofing Our Kids and Country, all aimed at fighting the culture of overprotection. She’s not alone in this pursuit, but it may seem as if she’s fighting an uphill battle, for in many ways the trend toward safetyism has increased over the past decade.

I was especially attentive to the controversy that Skenazy provoked in 2008 because I had published a book a year earlier that focused on the many aspects of childhood in the early 21st century that were at odds with how our ancestors grew up, and this was causing problems for contemporary children. At that time, my major concern was that adults were rushing children through a childhood that has purposes of its own, often by using developmentally inappropriate techniques for educating children. I viewed adults’ overscheduling and overprotection of children as reducing free play, which is an important component of healthy development. I was less attentive to the possibility that Americans were infantilizing their children by prolonging their period of dependency. With the benefit of hindsight and following a decade of “a smartphone in every teenager’s pocket,” I now believe that the development of many contemporary children is being both rushed and delayed—rushed by academic acceleration that can turn children off to the natural joy of learning and by exposure to adult content and issues, and delayed because of safetyism, trying to protect children from the slings and arrows of everyday experiences and as a result postponing independence and reducing resilience. Poor psychological adjustment can result in either case, each reflecting a mismatch between children’s evolved adaptations and their current environments.

I discussed the mismatch hypothesis briefly in Chapter 1. Our human ancestors evolved in very different environments from those of today, and as a result some adaptations shaped to deal with Stone Age culture may not be good matches for contemporary children. Early Homo sapiens lived in small groups as nomadic hunter-gatherers. As discussed in Chapter 2, most hunter-gatherer communities were neontocracies, valuing children and giving them a good deal of autonomy in their daily lives. Babies stayed close to their mothers until weaned, around 2 or 3 years of age, and then spent much of their time playing with other children. Little was expected of them in the way of chores; there was no formal schooling and little formal instruction. Through observation and play, they learned how to be hunter-gatherers. There were really no other career opportunities for them.

If the childhoods of hunter-gatherers should be the model for understanding modern children and their development, as many scholars believe, life for children in WEIRD (Western, educated, industrialized, rich, democratic) societies is vastly different from that of our forechildren. Formal education—with groups of same-age children sitting quietly at desks most of the day, being instructed by an unfamiliar adult on topics of no immediate survival value—is an evolutionarily novel phenomenon, and it is little wonder that many children find school boring and burdensome, tolerable only because of the social interaction they have with their peers. But the lives of children and adults have been diverging from those of hunter-gatherers for at least 10,000 years, starting with the advent of agriculture and animal domestication and continuing through the establishment of cities, nations, systems of justice, and corporations. People also have had to adjust to advances in material and intellectual culture, from the wheel and metallurgy to writing, mathematics, and the Internet. One amazing thing about our species is that we, and especially our children, have the neural, cognitive, and behavioral plasticity to adapt to such changes—to deal with the mismatch between the environments in which our ancestors evolved and current ones. We saw in Chapter 2 that many traditional cultures—gerontocracies—treat children harshly, valuing them for their economic contribution to the family, very different from both ancestral hunter-gatherer environments and our own. Yet such cultures still manage to produce successful and reproductive adults. So does our culture, but this does not mean that adapting to environments highly different from those of our ancestors does not have psychological costs. Perhaps we can better construct our ecologies to take advantage of the fruits of modern life while minimizing the consequences of the mismatch between current and ancient environments.

Modern human culture is full of mismatches. Evolutionary mismatches occur when there is an adaptive lag: the environment in which an adaptation evolved changes more rapidly than the once-functional adaptation itself. In particular, human cultural change outstrips human biological change, causing many mismatches. For example, our preference for sweet and fatty foods evolved in an environment in which nutritious meals had to be caught, dug out of the ground, or picked from fortuitously found trees and bushes. Because of the recent prevalence of fast-food restaurants and supermarkets, these evolved preferences for high-calorie food put many contemporary people at risk for diabetes and obesity. Modernity, in general, has resulted in substantial changes in how people make a living, spend their time, raise their children, and relate to one another, and some scholars have argued that the high levels of depression and other mental-health problems suffered by contemporary people are the consequence. My concern here is not with the entire panoply of mismatches between the environments of our ancient ancestors and those of WEIRD nations today but, rather, with those mismatches that are mainly associated with a particular time in development. For example, hunter-gatherer infants, and presumably the infants of our ancient ancestors, were nearly always in close contact with their mothers or other caregivers and were breastfed on demand for the first 2 or 3 years of life. This is very different from the practices of WEIRD societies, and some people argue that this produces a mismatch with infants’ evolved needs, having consequences for emotional and social development.3

Natural selection has operated at all life stages, such that children are well adapted to the particular developmental niche in which they live. When environments change, including cultural environments, mismatches can occur. I focus here on mismatches in two stages of life, adolescence and childhood. Adolescence extends from the onset of puberty until young adulthood, although it is impossible to set specific ages for this stage. It is a time when young people seek greater independence from their parents and develop their adult identities. They are particularly sensitive to social relations and prone to risk-taking and sensation seeking, tendencies accompanied by neurological changes.4 Three cultural changes that cause mismatches with adolescent development are (1) a prolongation of dependency, fostered in part by safetyism; (2) the widespread use of social media; and (3) hyper-individualism. In contrast, childhood, following anthropologist Barry Bogin’s classification of children between about 3 and 7 years of age,5 represents a time of much learning. Children in the past seemingly educated themselves, using their evolved social-learning abilities to acquire the technologies and social conventions of their culture, mostly through play with other children. Formal schooling in WEIRD countries is vastly different from the environments in which our ancestors learned, and these mismatches may be especially consequential during childhood.

I need to make clear at the beginning of this chapter that I am not arguing that a mismatch is always a bad thing—that any deviation from ancestral environments is “bad” and living as much as we can as our ancestors did is “good.” This is an example of the naturalistic fallacy, mentioned briefly in Chapter 2, which is the false belief that if something is evolved (or is “natural”) it must be “good,” or at least accepted as part of human nature. Many evolutionary mismatches are the result of advancements in modern technology that make life more worth living (or permit us to live longer lives), and we must keep in mind the positive features of these mismatches as well as their negative ones.

How Slow Can You Go?

More than any other mammal, Homo sapiens follow a slow life history strategy, investing heavily in a few slow-developing offspring. (See discussion of life history theory in Chapter 2.) There is, however, variation in the speed at which individuals move through life, with differences in the resources and support available when growing up resulting in children adjusting their life course trajectory. Compared with children growing up in resource-rich and predictable environments, children living in harsh and unpredictable environments develop faster, engage in risky behavior, have sex earlier and with more partners, and, as adults, invest less in each of a larger number of offspring. They essentially develop an opportunistic lifestyle. Children growing up in more favorable and predictable environments show the opposite pattern, developing a futuristic lifestyle. Natural selection has been sensitive to the early environments of youth, and children have enough plasticity to adapt aspects of their development to an anticipated future (current environments being the best predictor of future environments).

One way a mismatch can occur is when children living in harsh and unpredictable environments follow fast life history strategies, which may have been adaptive in ancient ecologies but result in youth engaging in dangerous and sometimes delinquent behaviors in many of today’s societies. Such behavior may still be adaptive in a Darwinian sense, while putting children and adolescents at odds with economic and criminal-justice systems in first-world nations. I discussed in Chapter 2 how life history theory can be used to understand and possibly deal with the risky and sometimes violent behaviors of adolescents and young adults who grow up in less-than-optimal environments, and I won’t revisit that issue here. Perhaps a more pressing issue is the mismatch caused by children following an increasingly slow life history strategy, which typifies many children living in WEIRD societies today.

Following a (Really) Slow Life History Strategy

In many affluent cultures, life for children has never been better or safer. Their parents had relatively few children, a pattern fostered in large part by reliable birth control (itself a mismatch with ancient environments), as well as the resources to invest intensely in them. Low birth rates and substantial investments in children are not limited to North America but are found worldwide. The fertility rate (number of children per woman in her childbearing years) in 2019 throughout Europe and in many South American and Asian countries was less than 2 per woman, below the replacement rate.6 Once born, children have a high life expectancy, bolstered by government systems that ensure good public health (clean water, good sanitation) and availability of high-quality medical services. Granted, these services are not equally available to all citizens of developed countries, but the likelihood of surviving to adulthood for children born in WEIRD countries today is greater than it’s ever been. Such high investments by parents should cause children to follow a slow life history strategy, which most do, and this is wholly consistent with evolutionary logic. So where’s the mismatch? The mismatch is that many children today are following an exaggeratedly slow life history strategy, promoted by their safety-conscious parents, producing young adults less prepared for adult life than previous generations.


I earlier defined safetyism as an excessive concern for the physical and emotional safety of children. According to Lukianoff and Haidt (2018), safetyism began in the 1980s, peaked in the 1990s, especially among educated parents, and continues at high levels today in the United States and other WEIRD countries. Whereas children of earlier generations were often encouraged to “get out of the house and go play,” presumably with neighborhood children (Skenazy’s free-range kids), this is foreign to many of today’s youth, who instead have formal play dates, made and supervised by their parents. As a mundane example, consider a typical school day when I was a child and one for many children today. In the morning I waited at the end of a neighbor’s driveway with other children for the school bus. The only time we’d have a parent join us was for the first few days of school for a kindergarten child. After school I walked home alone from the bus stop, then after a snack and change of clothes, I left the house and played with other kids in the neighborhood until supper time. Today, when I drive from home to work in the morning, I pass several school bus stops, with children, some in middle school, waiting in cars with a parent, usually one child per car. When I do see children mingling together at a bus stop, there is always at least one car, and usually several, with a watchful mom inside. This scene is replayed in the afternoon, as the yellow school bus unloads children to parents waiting in their cars.

In place of free play, many children today, beginning during the preschool years, take part in organized activities in the afternoons, usually with a parent nearby, from tumbling classes and dance lessons to karate and tee-ball. Children keep plenty busy, but almost always at the direction of some adult, be it a parent, teacher, or coach. Free play and unorganized sports, with children making and enforcing the rules in the absence of an adult, are far less frequent than in decades past, but children are safe, with an adult always supervising the activities.

Lukianoff and Haidt see this emphasis on safety extending far beyond childhood into adolescence and young adulthood. They point out that in recent years colleges and universities have become overly concerned with the emotional safety of their young-adult students, who feel unsafe listening to speakers who espouse opinions different from their own and believe that the role of college is to provide them with a safe space rather than one that challenges them emotionally and intellectually. For example, in November 2019 a formal resolution of impeachment was brought against the University of Florida’s student body president, primarily for using student fees to bring a speaker (Donald Trump, Jr.) to campus to promote a particular political party (which is against University rules), but also because having such a speaker “endangered students marginalized by the speaker’s white nationalist supporters.” Being exposed to the views of a speaker’s supporters was not viewed as an expression of free speech but, rather, as something that would make some students uncomfortable (“endangered”) and, thus, a reason for impeachment.

These attitudes seem to be more common among students at more elite universities, although I’ve witnessed occasional complaints about controversial speakers or lecture topics from students at the public university where I teach. The extent of the emphasis on “safety” on college campuses really hit home for me when I heard about the brouhaha at Yale over what constitutes an appropriate Halloween costume. College administrators had cautioned students about wearing costumes that might reflect cultural appropriation or might offend fellow students. An Anglo student wearing a sombrero, for example, might be viewed as offensive to some Latin-American students. In response to administrators’ concerns, Erika Christakis, a lecturer and associate master of one of the residential colleges at Yale, wrote an email urging students to think critically about the administrators’ guidelines on costumes to avoid at Halloween:

I don’t wish to trivialize genuine concerns about cultural and personal representation . . . I know that many decent people have proposed guidelines on Halloween costumes from a spirit of avoiding hurt and offense. I laud those goals, in theory, as most of us do. But in practice, I wonder if we should reflect more transparently, as a community, on the consequences of an institutional (which is to say: bureaucratic and administrative) exercise of implied control over college students.7

Christakis’s call for dialogue, which many would consider a reasonable request, was met with fury and demands by many students, faculty, and deans that she be fired. Perhaps Christakis was wrong, and the societal climate requires greater sensitivity to people’s feelings of cultural identity, but surely this can be an issue for discussion and not one so offensive that anyone who proposes it should be fired.

Lukianoff and Haidt view the oversensitivity of Yale and other college students as a result of the emphasis their parents and teachers have placed on their physical and emotional safety in all aspects of their lives, and they believe that there are larger consequences to safetyism:

When children are raised in a culture of safetyism, which teaches them to stay “emotionally safe” while protecting them from every imaginable danger, it may set up feedback loops; kids become more fragile and less resilient, which signals to adults that they need more protection, which then makes them even more fragile and less resilient.8

There are other consequences to safetyism, some of them positive. Consistent with life history theory, children who follow a slow strategy should be more apt to take a futuristic as opposed to an opportunistic perspective on life and thus engage in less risky behavior. In other words, parents’ and teachers’ emphasis on keeping children safe should result in the children themselves emphasizing their own safety, in essence continuing to practice safetyism in their own lives. This, indeed, seems to be the case, and it has been thoroughly documented by social psychologist Jean Twenge in her 2017 book iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood. Twenge defines iGen’ers (referred to by others as Generation Z, or sometimes “zoomers”) as being born between 1995 and 2012. Not only are they the generation that experienced perhaps the height of safetyism, but they are also “the first generation to enter adolescence with smartphones already in their hands.”9 (More on this later.)

Twenge presents data culled from several long-term surveys of American teenagers and young adults, documenting secular changes in important behaviors, in some cases over the last five decades. These surveys, asking participants in 2012 or 2015, for example, the same questions at the same age as participants in 1976 or 1991, provide a look at how adolescent and young-adult behavior has changed between members of different generations (iGen, 1995–2012; Millennials, 1980–1994; GenX, 1965–1979). Twenge includes in her book over one hundred graphs displaying generational changes in important behaviors. Some behaviors show gradual change in frequency over the generations, whereas others show marked changes associated with iGen.

The consequences of safetyism on iGen’ers are seen in a broad range of behaviors. Compared with earlier generations, iGen’ers are less likely to get together with their friends, go to parties, or spend time out of the house without their parents; they are less likely to date, have sex, and have children out of wedlock; iGen’ers are less likely to have jobs or otherwise make their own money; they are less likely to drive, are safer drivers when they do, and are less likely to get in a car driven by someone who has been drinking; they are less likely to get into fights, be involved in sexual assaults, or fight with their parents; iGen’ers spend more time alone than teenagers in past generations; as teenagers, they are less likely to drink, especially binge-drink, although they use marijuana as much as Millennials did because they think it is safe. And many iGen’ers apparently feel the same about COVID-19, believing it is only a “flu on steroids” that will do them little harm, accounting for why many ignored social-distancing guidelines.

Consider, for example, changes in four behaviors (having a driver’s license, having tried alcohol, ever dated, worked for pay during the school year) for 12th graders between the years 1976 and 2016. Twenge reported that there has been a steady decline in each of these behaviors, accelerating around the year 2000. For example, compared with high school seniors in the late 1970s, of whom about 85% had ever dated and about 75% had worked for pay during the school year, these values were both around 55% for 12th graders in 2016. Similar declines were found for having tried alcohol (about 83% in 1994 and 62% in 2016) and having a driver’s license (about 86% in 1976 and 77% in 2016). This downward trend did not begin with iGen but reflects a gradual change over the last 40 years, with the iGen’ers holding the current anchor points. Taken together, the picture is one that most parents of teenagers had hoped for: iGen teens are doing what their parents want them to do. The rebelliousness of earlier generations has receded. According to Twenge, “iGen doesn’t rebel against their parents’ overprotection—instead they embrace it.”10

At first blush, it may be hard to see any downside in this pattern: teenagers are more cautious and safety conscious than ever before, displaying more responsible and adult-like behavior than previous generations of adolescents. But this is only partly true. Their reduced risk-taking is accompanied by taking less responsibility for their lives than did previous generations of teenagers, not more. A driver’s license and money of one’s own represent new levels of freedom for many adolescents, but more and more teens are happy to have their parents chauffeur them around and to ask mom or dad for cash or their credit card when they want something. Rather than becoming adult-like sooner, they are remaining children longer. According to Twenge, “Instead of resenting being treated like children, iGen’ers wish they could stay children for longer.”11

Given what we know about life history theory, this may be expected and a good thing. Why take unnecessary risks and take on adult responsibilities when it’s not necessary? The answer, of course, is that the risk-taking in adolescence, so prevalent in generations past, involves not simply making bad decisions but also preparation for independence and adulthood. Risk has gotten a bad name. Most of the time when we speak of risk with respect to adolescents we consider only the downside—the potential damage that unprotected sex or drinking can produce. But risk has not only costs but also potential benefits. Adolescents who take risks and succeed (or even fail) have gained valuable experience and sometimes increase their status in the peer group. Risk-taking can include a wide range of novelty-seeking behaviors, such as joining the military, traveling cross-country, leaving home to attend college, taking a gap year, or trying one’s luck in the big city, each of which has potential benefits. Sociologist Howard Sercombe has proposed that developing a sense of agency—becoming an actor in one’s life rather than a passive observer of it—is the primary drive of adolescence,12 and this is something that many contemporary teenagers appear to be postponing in their quest, supported by their parents, to prolong childhood.

I am not arguing that we should encourage teens to drink more, have more sex, and text while driving. Having been a parent and grandparent of teenagers, I see the increased safety that the current trends reflect as a good thing. But these trends need not be coupled with a delay in the development of agency and independence, as they seem to be for many adolescents and young adults in the United States and other countries today. Humans’ long road to adulthood played a critical role in evolution, and children’s ability to modify their rate of development in response to early environmental conditions, in anticipation of future environments, continues to play an adaptive role in contemporary children. Although extending some aspects of immaturity well into traditional adulthood can have important benefits (for example, getting advanced education for some occupations), postponing the development of agency, thwarting novelty-seeking, and remaining dependent on parents and other “real” adults to make decisions is, I believe, deleterious. It is perhaps a bit ironic that evolutionary mechanisms, as reflected in the enhanced plasticity associated with childhood and adolescence, coupled with sensitivity to social and economic contexts, can serve to prolong immaturity to an extent that can actually be maladaptive to individuals. This seems to be the case for many children in WEIRD cultures today; the tendency may be bolstered in large part by children’s and adolescents’ immersion in the truly evolutionarily novel phenomenon of social media.

Matches and Mismatches with Social Media

I doubt it would come as a surprise to any reader of this book that we are currently in the midst of a cultural revolution. I’m referring not to any political revolt, but to the digital revolution, beginning with desktop computers and email in the 1980s and continuing through the first two decades of the 21st century with the Internet, Google, computer tablets, smartphones, and social media. Platforms such as Facebook, Twitter, Snapchat, and Instagram (which could be outdated by the time this book is published) consume the time and attention of literally billions of people each day, while computers and speedy Wi-Fi access have become necessities for a proper education and quality of life. Later in this chapter I will examine how young children’s ways of learning may not match the formats used in modern technology. In this section I focus on how children’s and adolescents’ use of social media reflects both a match and a mismatch with evolved adaptations.

Social Media Matches Adolescents’ Drive for Social Interaction and Belonging

Perhaps I haven’t said it enough throughout this book, but humans are a highly social species. The previous chapter focused on how humans’ unique social-cognitive abilities evolved from those of our common ancestor with chimpanzees as a result of changes in great ape ontogeny. The emphasis in that chapter was on the evolved social capabilities of infants and young children. These same adaptations, developed in the first 5 or 6 years of life, are subsequently used to navigate the social worlds of childhood and adolescence. Author Judith Harris proposed that human social behavior is predicated on four evolved adaptations that humans share with other primates: group affiliation and in-group favoritism; fear of and hostility toward strangers; within-group status seeking; and the seeking and establishment of close dyadic relationships, or friendships.13 We saw in Chapter 6 that in-group favoritism and out-group hostility are found early in childhood, and the importance of friends and one’s place in a peer group during middle childhood and adolescence have been recognized for many years (perhaps millennia) and have been the topic of much psychological study.14

Once children start school (or begin to play in peer groups in unschooled cultures), they spend more time with other children and are increasingly influenced by them, with such influence peaking in mid-adolescence.15 This is illustrated in a study by developmental psychologist Lisa Knoll and her colleagues in which people ranging from 8 to 59 years of age were first asked to rate the riskiness of a variety of everyday situations, such as crossing a street while texting, driving without a seatbelt, or climbing on a roof. Following this, the participants were told how either teenagers or adults rated the riskiness of the same behaviors, and then the participants were asked to rate the situations again. Would people change their assessment of risk based on the opinion of others, and, if so, would they be more influenced by what the teenagers or adults had to say? The subsequent ratings of children (8–11 years), young adults (19–25 years), and older adults (36–59 years) were more influenced by the opinions of adults, whereas the ratings of young adolescents (12–15 years) were more influenced by the opinion of fellow teens. Older teenagers (15–18 years) were influenced equally by the opinions of teenagers and adults.16

Adolescence is a time when social interactions and social approval (and rejection) are paramount to adolescents’ sense of self and self-worth. (In large part, it was adolescents’ and young adults’ need for social interaction that made social isolation during the COVID-19 pandemic so difficult for them, and why many young people ignored social-distancing guidelines when the economy reopened.) This is an evolved feature of adolescence, coupled with an increased tendency to engage in risky behavior. Much of that (p.232) risky behavior is done in the presence of peers or is meant to enhance social approval. This realization led neuroscientist Sarah-Jayne Blakemore to propose that adolescents’ greater propensity than either children or adults to take physical risks is driven in part by their avoidance of social risk.17 Better to risk the consequences of underage drinking than the social disapproval of one’s peers. (I recall one teenager swallowing a live goldfish at a showing of the Rocky Horror Picture Show. When asked why he would do such a thing, his answer was that he didn’t want to look stupid in front of his peers.) Similarly, developmental psychologist Lawrence Steinberg and his colleagues view adolescent risk-taking as reflecting a competition between two developing brain systems: the socioemotional network, located primarily in the limbic system, and the cognitive-control network, governed primarily by the frontal lobes.18 In particular, the nucleus accumbens, the amygdala, and other areas of the limbic system, associated with reward and emotion, develop ahead of the prefrontal lobes, associated with higher-level cognition and the control of behavior (for example, planning, inhibiting some responses while activating others). This leads to what some researchers call a mismatch in maturation (see Figure 7.1), which may be responsible, in part, for the sensation seeking, risk-taking, and sometimes poor decision-making typical of adolescence.19


Figure 7.1. Average sensation seeking (top) and self-regulation (bottom) as a function of age across 11 different cultures. Gray shading denotes plateau or peak, and dashed lines indicate 95% confidence intervals.

Source: Steinberg et al., 2017, with permission.

This mismatch is illustrated in a study of risk-taking in adolescents (13 to 16 years old), young adults (average age = 19 years), and adults (average age = 37 years) during a video driving game.20 Participants played the game once while alone and again when friends were in the “car” with them. Interestingly, the number of crashes during the game was essentially the same for the three groups of participants (a little over one per session) when driving alone. In contrast, the number of crashes nearly tripled for the adolescents when driving with friends, doubled for young adults, and remained essentially unchanged for the adults. This pattern is similar to actual driving statistics for vehicular deaths. Teenagers are involved in proportionally more vehicle deaths per miles driven than are older adults, and the death rate increases the more people (usually other teens) are in the car: the rate of vehicular deaths for 16- and 17-year-old drivers more than doubles when there are three or more passengers in the car relative to when teens drive alone. In contrast, there is no relationship between vehicular deaths and the number of passengers for older drivers.21

I must emphasize that although this mismatch in brain development may be responsible, in part, for some of the problematic risky behaviors (p.233) of adolescence, it is not a reflection of a dysfunctional brain. Rather, it is a brain shaped by natural selection to be well suited to the tasks of adolescence. According to neuroscientist Jay Giedd, “the teen brain is not a broken or defective adult brain. It’s been exquisitely forged by the forces of our evolutionary history to be a very good teen brain. It’s different than children, and it’s different than adults, but it’s not broken.”22

(p.234) Until relatively recently, children made friends and enemies and strove for recognition and status in person. Although friends could stay in touch through letters or phone calls, such forms of communication supplemented face-to-face interaction, where “real” social life was conducted. With the advent of social media, this swiftly changed. Although some social media sites were online in the late 1990s, the first “big” site launched in 2003 (MySpace, which still had 50.6 million unique monthly visitors in 2019), with Facebook opening to anyone over 13 years of age in 2006. As of June 2020, Facebook had over 2.6 billion monthly active users worldwide.23 Add to these other platforms such as Twitter, WhatsApp, Instagram, YouTube, Messenger, and Snapchat, among many others, and one can get an idea of the immensity of social media’s presence in people’s lives today.

Use of social media skyrocketed, however, with the advent of smartphones. Smartphones had been around since the 1990s; they first became widespread with the BlackBerry, marketed for business use, and their use then exploded in 2007 with the introduction of the iPhone. Google’s Android phones came out a year later, and now nearly 3 billion people across the globe have smartphones, including 81% of the American population, with approximately 95% of American teens having access to a smartphone. (South Korea is the nation with the highest smartphone use, at 95%.)24

These social media are used by people of all ages. The current United States president, a man in his early 70s, makes daily use of Twitter, and even a digital dinosaur like myself has a Facebook account, frequently sends and receives text messages, and occasionally uses WhatsApp to communicate with my European friends. But it is adolescents and young adults, and increasingly preteens, who are the heaviest users of social media, and the ones about whom social scientists are most concerned. According to surveys reviewed by Twenge in her 2017 book, the average American teenager spends six or more hours a day interacting with some screen (texting, social media, Internet, video games); more than 80% of 10th- and 12th-grade students use social media daily, with adolescent girls being heavier users than boys.25 Part of the reason adolescents are often consumed by social media is that they grew up with it and use it intuitively. They are digital natives. I would guess that many older (40 years plus) readers of this book have more than once asked a teenager to help them use an app; configure a smartphone, computer, or television; or in some way assist them with operating digital technology.

Teenagers’ special fascination with social media is no mystery. Although all humans have evolved to be social creatures, natural selection has shaped (p.235) adolescents’ brains to be especially sensitive to social cues. Teenagers stay in touch with friends on social media and can even make new friends. They can receive social approval, often in the form of “likes,” the number of friends they have, or the number of visits their posts get. Social media can be a boon to shy or inhibited children and teens who find face-to-face interaction uncomfortable; adolescents can find other like-minded peers who share interests not found among their physically present friends. In short, social media match adolescents’ evolved social tendencies exceptionally well, exciting some of the same brain regions and neurochemicals that in-person social interaction does.26 However, these same matches between social media and adolescents’ evolved tendencies also create the potential for maladaptation.

Mismatches Between Social Media and Children’s and Adolescents’ Lives

The ethologist and Nobel Laureate Nikolaas Tinbergen noted how certain stimuli could automatically elicit stereotypic and survival-related behaviors in a variety of animals. For example, male stickleback fish will engage in aggressive behavior in response to the red spot on the underside of another male fish or to inanimate objects with a red lower half, and herring gull chicks will make a begging response to the markings on the head (white head, and yellow bill with a red spot) of an adult herring gull or to similar artificial stimuli. What is especially interesting in the latter case is that herring gull chicks respond even more strongly to a red knitting needle with three white bands painted on it than to a real herring gull face (see Figure 7.2). Tinbergen referred to an artificial stimulus that produces an exaggerated response relative to the natural stimulus as a supernormal stimulus, or a superstimulus.27 Supernormal stimuli have been found for a variety of behaviors and for many different animals, and they are consistent with the idea that natural selection shaped an animal’s responsiveness to perceptual features associated with a natural stimulus rather than to the natural stimulus itself. For example, herring gull chicks evolved not to recognize and make begging responses to the head of adult herring gulls, per se, but to make begging responses to perceptual features associated with adult herring gull faces. Thus, artificial stimuli that exaggerate those features can have an even stronger effect on behavior than the natural stimulus itself (see Figure 7.2). Humans’ preference for junk food is a case in point. Our ancestors evolved to prefer foods that tasted sweet (p.236) or salty, with these tastes being honest signs of a nutritious meal.
Today, junk foods have higher sugar and salt content than is good for us, but we find them even more appealing, not because we evolved to love Ben & Jerry’s Chunky Monkey ice cream, chocolate cream-filled doughnuts, or sea-salt-and-vinegar potato chips, but because these foods tickle our taste buds to an exaggerated degree. The same argument can be made for social media, especially for adolescents, who have evolved to be particularly sensitive to social cues of acceptance and rejection. Just as it’s difficult for many of us to eat ice cream or potato chips in moderation (“I bet you can’t eat just one!”), so is it difficult for adolescents to limit their use of social media.


Figure 7.2. An adult herring gull’s head, an artificial stimulus with a red dot, and a supernormal stimulus that elicits an exaggerated begging response from herring gull chicks.

Source: With permission from Chelsea Schuyler, The Chelsea Scrolls, “Why Seagulls Have That Weird Red Spot: Push the Red Button.” https://thechelseascrolls.com/2018/01/23/advice-of-the-seagull-push-the-red-button/ (Retrieved August 23, 2019).

Unlike flesh-and-blood people, social media is always available to you. Adolescents can check it first thing in the morning and take it to bed with them at night. They can send and receive texts anytime during the day, create content (“Here’s my pumpkin bagel with peanut butter and cream cheese”), view other people’s posts, and count the likes and views they have on their own postings. They can post photos of themselves doing interesting things, or doing nothing at all. Because of cellphone cameras, people can take (p.237) numerous selfies, run the photos through filters, and display the most flattering (or funniest) images of themselves. And they can check the postings of celebrities, seeing what wealthy, famous, or socially connected people are wearing or promoting, or how they are otherwise spending their time.

Mismatches of Social Media with Social Relations. But just as foods high in sugar and salt can be simultaneously attractive and maladaptive, so can social media. Social psychologist David Sbarra and his colleagues make this explicit, proposing an evolutionary mismatch between smartphone use and close relationships. Ancient humans lived in small groups of hunter-gatherers and evolved sensitivity to the social cues and processes, including intimacy, necessary for forming and maintaining close relationships. Ready access to social media via smartphones may interfere with such relationships, in large part by causing people to be less attentive during in-person interactions.28 For example, in a study of 143 married or cohabitating women, 70% reported that smartphone use interfered with face-to-face interactions with their partners. Sixty-two percent of the women said that their partner’s attention to his phone or tablet during the couple’s leisure time occurred at least once a day.29 In other research by social psychologist Kostadin Kushlev and his colleagues, smartphone use was found to distract people, to reduce enjoyment during social engagements such as studying or eating a meal together, to reduce friendliness toward strangers, and to reduce the likelihood of making casual contact with people during a wayfinding task.30

Constant use of social media not only distracts people when they are interacting with others, but for many adolescents it is actually replacing in-person interaction. For example, although face-to-face interaction with friends is still popular, recent surveys find that many teens prefer texting to in-person interaction.31 Although the benefits of smartphones are many in terms of social connectedness, entertainment, and communication, their downside, especially for many teens and young adults, is that they can distract from and reduce the pleasure of ongoing face-to-face contacts, as well as replace to a large extent such in-person interactions.

Other downsides to the ubiquity of smartphones are sleep deprivation, with its psychological consequences, and reduced exercise, with its attendant health effects. Regarding sleep, smartphones and tablets do not sit on a desk but are mobile, and a number of studies have found negative consequences for teenagers who have access to social media in their bedrooms. Adolescents who take their smartphones or tablets to bed with them report fewer hours of sleep, spend more time looking at screens, read (p.238) less, have poorer school performance, have a greater tendency toward obesity, and report lower psychological well-being than adolescents who do not have bedroom media.32

With respect to exercise and Internet use, a number of studies have reported that heavy Internet use is associated with children and adolescents getting less exercise and being at an increased risk for obesity.33 Relatedly, children and adolescents are also outdoors less often, which means they are spending less time in nature. Author Richard Louv calls this situation nature-deficit disorder34 and points out that it has been increasing over the past few decades. Biologist and naturalist E. O. Wilson coined the term biophilia to refer to people’s love of and fascination with the biological world, which seems to be especially strong in childhood.35 A number of studies have reported the positive effects on cognition or psychological well-being of spending time in natural rather than synthetic environments.36 Louv contends that, in part because of parents’ overprotectiveness and in part because of social-media use, children and adolescents today spend less time outdoors, which is resulting in increases in attention disorders and feelings of depression. Other research has suggested that there is a more direct route between social media use and mental health, and it is to this topic we now turn.

Social Media and Mental Health. A number of surveys have documented that levels of depression, anxiety, loneliness, and suicide among teenagers have increased over the past decade or so. The increase is especially sharp for girls beginning in 2012, which Jean Twenge notes corresponds with members of iGen—the first generation of native smartphone users—entering adolescence. These changes in mental health are correlated with the rise in smartphone adoption, but are smartphones, and the access to social media they provide, the cause of the secular change in adolescent well-being? Many think so, viewing excessive use of social media as an addiction, with the mental (and sometimes physical) consequences that addiction entails.

Although social media addiction has not been recognized as a disorder by the World Health Organization or the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), a quick Google search brings up hundreds of references to the topic, with the first scholarly paper on Internet addiction being published in 1998.37 Since then a number of researchers have documented associations between excessive (some would say addictive) use of social media and poor mental health, especially in adolescents and young adults, the most frequent users of social media.38 For example, Jean Twenge (p.239) and W. Keith Campbell surveyed a large sample of American children in 2016, examining the relation between amount of daily screen time and lifetime diagnoses of depression and anxiety, controlling for socioeconomic level, grade level, race/ethnicity, and sex (male or female). The results for 14- to 17-year-old children are shown in Figure 7.3. As you can see, adolescents who were “online” 1 to 4 hours a day were less likely to be diagnosed with depression or anxiety than teens with no screen time, illustrating a seeming benefit for youth who use social media versus those who have no access to social media or choose not to use it. The relation was reversed, however, for higher levels of screen time. Adolescents who were online an average of 7 hours a day were about three times as likely to be diagnosed with depression or anxiety as teens who spent just 1 hour a day online.39


Figure 7.3. Percentage of 14- to 17-year-olds with lifetime diagnoses of anxiety and depression as a function of their hours of screen use, controlling for socioeconomic level, grade level, race/ethnicity, and sex (male or female).

Source: Data from Twenge & Campbell, 2018; figure from Twenge, 2019, p. 374, with permission.

Other research has shown relations between amount of screen time and measures of psychological well-being, including ratings of happiness, (p.240) aggression, social relations with significant others, and suicide and suicide-related cognitions.40 For example, Jean Twenge, Gabrielle Martin, and W. Keith Campbell used data from a large national survey of adolescents to examine the relation between ratings of happiness and a variety of screen and non-screen behaviors for teenagers interviewed between 2013 and 2016. Although correlations in general were low, they were statistically significant and positive for engaging in non-screen activities such as sports or exercise, in-person social interaction, attending religious services, and reading print media (for example, the more one exercised, the happier one reported being); they were statistically significant and negative for engaging in screen activities such as video chats, social media, texting, playing video games, and Internet use in general. That is, the more time teens spent on social media, the less happy they reported being.41 In other research, the more time adolescents spent on social media and text messaging during a day, the greater were their symptoms of attention deficit hyperactivity disorder (ADHD) and conduct disorder.42 In a six-year longitudinal study examining texting starting at age 13, adolescents classified as “perpetual” users displayed higher levels of depression, anxiety, and aggression, and poorer relations with their fathers than adolescents who texted less often.43

Although most studies examining teenagers’ social media use and measures of psychological well-being report negative associations, others find only small relations44 or question the direction of causality. For example, psychologist Taylor Heffer and her colleagues followed a group of nearly 600 12-year-olds over two years and a group of undergraduate students over a six-year period, measuring social media use and depression. They reported that social media use did not predict later depression but, rather, that levels of depression predicted subsequent social media use among adolescent girls. That is, teenage girls showing signs of depression were more likely to subsequently use higher amounts of social media.45 Other studies report the opposite, however. For example, one study of college students reported that use of Facebook predicted future increased feelings of unhappiness, but that unhappiness did not predict subsequent increases in use of Facebook.46

The best way to determine cause and effect is to conduct experiments in which different levels of social media use are determined by researchers, and the effects on well-being measured. Such experiments are rare but several have been done. For example, a 2016 Danish study reported improved self-ratings of life satisfaction and emotional well-being for Facebook users who volunteered to stay off the platform for a week relative to people who (p.241) used Facebook as usual (average age 34 years, 1,095 participants in total).47 Another study instructed Facebook users either to passively scroll through Facebook or to actively post comments.48 In both a laboratory and field (naturalistic) study, the authors reported that the passive-scrolling group reported lower levels of well-being and higher levels of envy than the more active users, indicating that how one uses Facebook has consequences for well-being. In the most recent experimental study, clinical psychologist Melissa Hunt and her colleagues assessed college students’ baseline levels (time per day) on three social media platforms (Facebook, Snapchat, and Instagram), as well as measures of loneliness, depression, anxiety, and fear of missing out (FOMO).49 One group of students was then asked to limit their use on each platform to 10 minutes a day for the next three weeks, whereas the control group was simply asked to keep track of how much time they spent on each platform. The group that limited its use to 10 minutes per platform showed reductions in loneliness and depression relative to baseline, and both groups showed reductions in anxiety and FOMO. The authors proposed that their results provided strong evidence that excessive social media use is detrimental to psychological well-being. They interpreted the reduction in anxiety and FOMO of the control group as a result of increased self-monitoring of social media use.

A large majority of teenagers and young adults in developed countries use social media daily, and most research indicates that too much use has negative consequences for their mental health. Social media push all the right buttons in the adolescent brain, which has evolved to be sensitive to social cues of acceptance and rejection. Facebook and Snapchat almost demand social comparison, and because everyone puts his or her best face forward on these platforms, it appears that everyone has a better life than you do. Support for this interpretation comes from psychologists Mai-Ly Steers, Robert Wickham, and Linda Acitelli, who asked college students to keep a diary over a two-week period of Internet use and to answer online questionnaires assessing social comparisons and depressive symptomology. They reported that feelings of depression were mediated by social comparison: college students felt depressed after spending time on Facebook because they felt bad when comparing themselves with others.50

The qualities of social media both match and mismatch adolescents’ evolved adaptations. Some social media use seems to be beneficial for teenagers’ (p.242) psychological well-being, and how can it not, as “everyone is doing it,” and this is how adolescents keep in touch with their peers today? But too much use is associated with reduced face-to-face interaction, less sleep and exercise, and feelings of anxiety, loneliness, unhappiness, and depression. The effects are sometimes small and not experienced by everyone, and the direction of causality is sometimes in question: does being on social media cause depression, or do feelings of depression cause adolescents to spend more time on social media? My guess is that the relation is bidirectional and cyclical. Social media is the fast food of the Internet. Like fast food, social media platforms are supernormal stimuli, signaling experiences that our ancestors found valuable and thus worthy of our attention. We’re very much attracted to them and a little bit won’t hurt us. It’s overconsumption that can be maladaptive.


Author Sebastian Junger starts his 2016 book, Tribe, with a look at colonial America and relations between the newly arrived Europeans and the native Indians. It was not uncommon for Europeans, sometimes as a result of kidnapping or the spoils of war, to live among Indians, and likewise for Indians to live among Europeans. When opportunities arose for people to be reunited with their birth communities, Indians—often reared from childhood in European-style societies—quickly reverted to a native lifestyle. The reverse rarely happened. Europeans who had experienced life in an Indian community wanted to stay, and, when forcefully reunited with their kin, often escaped back to their adopted tribe. This was perplexing to the Euro-Americans, who viewed the Indians as savages and themselves as the epitome of civilized people. When given the choice, why would anyone choose the primitive and difficult life of an Indian over the more comfortable and civilized life of Europeans?51

It’s not that the Indians were nicer. They were frequently at war among themselves, as well as with the settlers. They had slaves and could be brutal to their enemies. So could the Europeans, of course, but Junger argues that it wasn’t any enhanced kindness of the Indian lifestyle that appealed to people, but the greater sense of community. The greater material wealth of Western civilization had its appeal, but Indian life was intensely communal, with people embedded in a complex web of relationships and dependencies. Tribal life was more similar to the conditions in which our ancestors evolved. In (p.243) Chapter 6 I described humans as a hypersocial species, and deviations from a communal to a more individualistic lifestyle, as found in many contemporary societies, produce a mismatch. Although this mismatch likely affects people of all ages, it may be particularly large for teenagers from highly individualistic countries such as the United States.

New York Times columnist David Brooks is the latest in a long line of social commentators and scholars (for example, Gail Sheehy, Passages; Robert Putnam, Bowling Alone) who have noted Americans’ march toward increasing individualism—hyper-individualism in Brooks’s term—over the past 50 years.52 Hyper-individualism, which typifies other WEIRD countries such as Great Britain and Australia, emphasizes personal freedom (“I’m free to be myself”) as well as personal achievement. It has produced many positive outcomes over the last half-century, including reductions in sexism, racism, and homophobia, as well as remarkable economic accomplishments exemplified by Silicon Valley. However, hyper-individualism makes it more difficult to lead a bonded, communal life, and Brooks notes the decline over this same period in community involvement, stable marriages, trust in societal institutions, and close friendships in America, with corresponding increases in loneliness, suicide, and depression, the young being especially affected.

The connection between social support and depression has long been noted. In the late 1800s, the French sociologist Emile Durkheim showed that people embedded in social relations had lower suicide rates than people who were socially isolated.53 At a global level, people living in collectivist cultures have lower rates of depression and other negative emotions than people living in more individualistic cultures.54 One possible explanation is that people from individualistic cultures are more genetically predisposed to depression than people in collectivist cultures. For example, behavioral geneticists have identified an allele of a gene responsible for the processing of the neurotransmitter serotonin that is associated with a predisposition to negative emotions such as anxiety and depression. Neuroscientists Joan Chiao and Katherine Blizinsky found across a sample of 29 nations that people from collectivist cultures, including Japan, Singapore, and China, were more, not less, likely to have the specific allele associated with negative emotions, despite the fact that they were less apt to suffer from mood disorders. The authors attributed the lower levels of depression for people in collectivist countries to their collectivist cultural values.55

(p.244) Jean Twenge, in her 2017 book iGen, noted that iGen’ers, perhaps even more so than previous generations, are believers in individualism. Their delay in becoming adults affords them more time to develop an integrated sense of self. They ardently believe that people should be free to make their own decisions—to have choices—and thus they are less likely than previous generations to value community involvement or to accept traditional social roles and rules. Adolescents’ sense of individualism is associated with their online behavior. According to Twenge,

teens who spend more time online on social media are more likely to value individualistic attitudes and less likely to value community involvement . . . The good news is that they are supportive of equality of race and gender, one of the primary outcomes of individualism. But they are also less civically engaged and feel more entitled to things even if they don’t work for them.56

So what’s the mismatch, and why especially for adolescents? As we discussed earlier, teenagers and young adults are highly focused on social relations, and, as we saw in Chapter 6, even preschool children are aware of and respond differently to in-group than to out-group members. Children’s and adolescents’ evolved tendencies to belong and identify with groups are in conflict with some of the values of hyper-individualism that characterize WEIRD cultures. This conflict can make adolescents and young adults reluctant to commit to romantic relationships, and, because they eschew conventional community groups, those experiencing loneliness and a lack of belonging may be vulnerable to the lure of gangs or cults that provide the communal relationships humans have evolved to expect.

Adolescent Mismatches

Adolescence is both a biological and cultural phenomenon. It is also an evolved feature of our species. Many cultures have rites of passage sometime during the teen years, marking the transition from childhood to adulthood. Yet, few 13- or 14-year-olds in any culture are viewed as true adults after completing their rituals, and in most WEIRD cultures “adulting” for many is postponed well into the third decade of life. This is consistent with developmental psychologist Jeffrey Arnett’s concept of emerging adulthood, (p.245) describing people roughly between 18 and 25 years of age. Arnett coined the term in 2000 to describe a stage in life of many young people in WEIRD societies that is neither adolescence nor young adulthood, distinguished by independence from traditional social roles and normative expectations (for example, marriage, beginning a career). Emerging adults are able to explore their options for the future with respect to relationships, work, and worldviews.57

Despite cultural differences, the biology of adolescence is similar worldwide. The brains of teenagers and young adults emphasize risk, sensation seeking, social relations, and mating, while the parts of the brain responsible for the regulation of behavior lag behind. In ancient and many modern environments, these features were adaptive, helping young people move away from their parents and form identities and relationships of their own. The sensation seeking and quest for social belonging may have been adaptive for young men in hunter-gatherer and other traditional cultures, making them valued warriors and hunters, as well as making them attractive to young women, who were experiencing their own quest for independence. Adolescents today have the same brains as their ancient ancestors at that age, but environmental conditions are greatly different. As a result, children grow up more slowly than in generations past—that is, they follow a very slow life history course—and this affects, sometimes positively and sometimes negatively, the adults they will become. The relative lack of community and social bonding characteristic of individualistic societies is associated with increases in loneliness and mental health problems, and this may be especially potent for young people and can sometimes be exaggerated by social media. The genies will not soon be put back into their digital and individualism bottles, but an understanding of adolescents’ evolved features—adaptations shaped for dealing with very different environments—may help us better construct conditions for them and help them deal with problems when mismatches occur.

How Young Children Learn and the Mismatch with Formal Educational Practices

We, more than any other animal, live by our wits and acquired knowledge. Our ability to learn and to solve novel problems has permitted our kind to inhabit nearly all ecosystems of the earth, from the steamy tropics to the frigid Arctic; to adapt to any one of thousands of cultures; to invent writing, numerical, and musical systems; and to develop and use technologies from stone axes to smartphones. Humans possess and retain well into adulthood a high degree of neural, cognitive, and behavioral plasticity that affords such learning, although we are most plastic early in development, and learning in infancy and childhood sets the stage for future education.

Homo Sapiens Is the Most Educable of Animals

I use the term education to refer to the acquisition of abilities, values, beliefs, and knowledge valued by one’s culture. Education is the process of becoming a functional member of one’s society—of learning the ways of one’s social group, as well as acquiring the technological skills of one’s society. Education is not a modern phenomenon but is a core part of our species’ natural history. Throughout that history, crucial skills and knowledge were acquired “in context” or “on the job,” so to speak, while observing or interacting with other people or with things, many of which were cultural artifacts. Much of what was learned was of immediate survival value (for example, what is good to eat and what should be avoided, how to gather tubers), and other knowledge related to established group practices (for example, how to greet one another, how to worship deities). Long after our ancestors left their hunter-gatherer lifestyles and settled down in agrarian communities, most important knowledge and skills continued to be learned in such a hands-on manner. It was not until centuries after the invention of writing that literacy and numeracy became important skills for a vast number of our species.

Humans may not have evolved to read, maneuver automobiles on busy highways, or live in metropolises, but we have learned to do so. To accomplish such evolutionarily novel feats has required major changes in how culture is inculcated in children and transmitted across generations—chiefly through the invention of formal schooling. Literacy and numeracy date back only about 6,000 years, and for most of this time, only elite members of society (priests, members of the ruling class, some merchants) were literate. With the invention of the printing press in the late 1400s, literacy became important for religion (the Protestant Reformation in Europe emphasizing that people should receive the word of God directly, not through priests) and politics (the indoctrination of its members by the state to promote nationalism and create patriots).58 Subsequently, as a nation’s economy became more dependent on literacy and numeracy, formal education became the norm in nearly all countries, such that today people in most developed nations believe that having an educated populace is the backbone of a successful society; around the world, nations vie to develop curricula that will produce intelligent and productive citizens.

These changes in the demands of culture required changes in how children acquired that culture. Critically, in most cultures today, the most basic technological skills, including reading, writing, and numeracy, are context independent (that is, children learn them in contexts independent of any immediate use), and they are typically taught to children by unfamiliar adults in unfamiliar settings. The result is that much of formal education is an evolutionary novelty. This is in contrast to the way that our forechildren learned, assuming that hunter-gatherer practices are an indication of how our ancestors lived and passed on information from one generation to the next. From an early age, hunter-gatherer children spend most of their day playing in mixed-age groups, mostly unsupervised by adults. Children acquire most practical skills from their older peers and occasionally learn important skills by watching and interacting with their parents or other adults. Modern hunter-gatherer adults seldom directly instruct children in any skill, and it is likely that this was also true for our ancestors.59

Although what constitutes teaching can be subjective, with different academic disciplines having slightly different definitions,60 there is general agreement that formal instruction, as typifies schooling in WEIRD societies, is rare in hunter-gatherer and traditional societies; rather, according to anthropologists David Lancy and M. Annette Grove, in hunter-gatherer cultures, the “entire community and its surroundings are seen as the ‘classroom,’ and the ‘curriculum’ is displayed as an ‘open book.’ ”61 To this point, a number of educators and scholars have proposed that educational environments that depart substantially from ancestral environments can create unintended consequences, and they suggest that modern educational practices can be made more effective by considering the evolutionary history of children and learning.62

As I mentioned earlier, one feature of ancient educational environments was learning “in context” from more knowledgeable people (adults and peers). This was central to Russian psychologist Lev Vygotsky’s sociocultural theory and the concept of the zone of proximal development, defined as the difference between a child’s “actual developmental level as determined by independent problem solving” and his or her level of “potential development as determined through problem solving under adult guidance or in collaboration with more capable peers.”63 Children become competent at a skill when they engage in collaboration with other people, such that a more-experienced individual can scaffold the performance of a less-experienced individual.64 Scaffolding happens when the more-skilled person is sensitive to the abilities of a novice and responds contingently to the novice’s responses in a learning situation. As a result, the novice gradually increases his or her understanding of a problem. Scaffolding is most effective when done within the zone of proximal development. Although scaffolding occurs quite naturally in the home and on the playground, it is more difficult to achieve in the classroom.65 Strict adherence to a standardized curriculum and class sizes usually exceeding 20 children limit teachers’ abilities to assess individual children’s current skills and appropriately scaffold their learning. Another problem is that when students are arranged in same-age and same-skill groups, they miss the opportunity to learn from other more-skilled children and to serve as teachers to less-skilled children.

Learning Through Watching and Playing

Although formal instruction in a classroom may be the primary way in which most children in the world today are educated, in the distant past children mostly educated themselves via social learning and play in the context of mixed-age peer groups.

Learning Through Watching. Children evolved remarkable abilities to acquire the skills and knowledge necessary to survive in hunter-gatherer communities through observation, without the need for formal instruction. I reviewed children’s development of social-learning skills in Chapter 6, as well as how they differ from those of our simian relatives. As you may recall, one important difference between the social learning of apes and children is that children, but not apes, will engage in overimitation (beginning reliably around 3 years of age), copying even unnecessary actions displayed by a model. Although this sometimes results in a less efficient way to use a tool, for example, it more often results in children acquiring culturally important information (both technological, as in tool use, and symbolic, such as ritualistic behavior). Relatedly, Hungarian developmental psychologists György Gergely and Gergely Csibra proposed that overimitation is an adaptation that permits fast and accurate transmission of information between individuals, which they refer to as natural pedagogy.66

Learning Through Playing and the Mismatch with Contemporary Educational Practices. Most of the social learning of hunter-gatherer children is achieved in the context of play, defined as a “type of exploratory learning in which the young animal engages in a variety of behaviors in a low-risk, low-cost context.”67 In Chapter 5 I looked at the importance of play in child development as well as in the evolution of Homo sapiens. It is primarily through play that children acquire tool-using skills and, most critically, the people skills necessary for navigating the social environment. As educational psychologist Anthony Pellegrini and I wrote, “play seems to have been especially adapted for the period of childhood and is what children are ‘intended’ to do.”68 Children still learn through play today, although modern school curricula tend to minimize the role of play in learning, replacing free play with formal instruction. To some extent this is necessary. It is rare that children learn to read or do long division solely by playing. Instruction is usually necessary. But a curriculum without play is deviating substantially from the way children have evolved to learn, and incorporating play into formal education may have substantial benefits.

As I noted earlier in this chapter, the amount of time American children engage in free play has been declining steadily over the past five or six decades, being replaced by adult-supervised play at home and formal instruction in school. Psychologist Peter Gray argues that the increase seen in mental health problems of children and teens over this same time period is due, in part, to children’s loss of freedom to choose the activities they partake in (that is, the loss of free play). Gray also argues that the near absence of play in contemporary schools has negative consequences for learning.69 I am sympathetic to this viewpoint, but what evidence do we have that play actually supports learning, as opposed to being merely a distraction from “real” education?

A number of studies have found beneficial effects of play on various aspects of children’s cognition. In one study, the amount of time 3-year-olds spent talking with peers during fantasy play was positively associated with the size of their vocabularies at age 5;70 other studies found that the more children engaged in spontaneous sociodramatic play, the better they were at remembering and comprehending stories.71

Perhaps the area of research that has garnered the most attention recently is the relation between play and children’s executive function. Executive function refers to processes involved in regulating one’s attention and behavior and is critical in planning and behaving flexibly. Executive function consists of a related set of basic information-processing abilities, including working memory (or updating), involved in storing and manipulating information; inhibiting responding and resisting interference; and cognitive flexibility, as reflected by how easily individuals can switch between different sets of rules or different tasks.72 Scientists sometimes refer to “cold” executive functions used to regulate basic cognition related to learning, and “hot” executive functions used to regulate emotions.73 We’ve encountered aspects of executive function earlier in this book, for example, when discussing the evolution of inhibition being necessary for self-regulation and enhanced sociality in Chapter 4. Research over the last decade has clearly shown that both developmental and individual differences in executive function in childhood are predictive of IQ, academic performance, emotional competencies, and social cognition and behavior in adolescence and adulthood.74 And other research suggests that individual and developmental differences in executive function are influenced by children’s fantasy play.

The connection between play and executive function actually predates the modern era and can be found in the work of Vygotsky. Vygotsky argued that during fantasy play children use language to regulate their behavior—to stay in character—and this, in turn, enhances self-control. According to Vygotsky, “in play, the child always behaves above his average age, above his daily behavior; in play it is as though he were a head taller than himself.”75 More contemporary research seems to back Vygotsky’s contention. For example, in one study, parents reported on 6- to 7-year-old children’s daily activities, which neuroscientist Jane Barker and her colleagues classified as more- or less-structured. Tests of children’s executive function were then correlated with how children spent their leisure time. The researchers reported that the more time children spent in less-structured activities (such as free play), the greater their executive function tended to be.76 Other research has reported significant relations between measures of executive function and pretending in preschool children,77 that impulsive preschoolers who engaged in a high frequency of sociodramatic play showed enhancement in self-regulatory behaviors over the course of the school year,78 and that programs encouraging children to use self-regulatory speech and to engage in dramatic play produced improvements in children’s executive-function abilities.79

Why should pretend play and executive function be related? Developmental psychologists Clancy Blair and Adele Diamond write: “During social pretend play, children must hold their own role and those of others in mind (working memory), inhibit acting out of character (inhibitory control), and flexibly adjust to twists and turns in the evolving plot (mental flexibility); all three of the core executive functions thus get exercise.”80 Although most of these studies are correlational, making it difficult to infer that fantasy play causes improvements in executive function, a number of researchers have argued that the relation is bidirectional, so that not only does executive functioning play a role in children’s ability to immerse themselves in pretend play, but pretend play, in turn, also facilitates the development of executive functions.81

Free, unstructured, fantasy play has been touted as having many benefits for child development, including cognitive development of children growing up in WEIRD societies. Play was how our forechildren spent most of their time and learned the ways of their culture in the process. Today’s children spend much less time in pretend play, both in and out of school, in part because of recent changes in media-centered activities and school curricula that emphasize more formal learning, even during the preschool years. The result is a departure from developmentally (and species-) appropriate activities, which may have consequences not only for children’s learning, but also for their emotional development.82 According to Peter Gray,

We are pushing the limits of children’s adaptability. We have pushed children into an abnormal environment, where they are expected to spend ever greater portions of their day under adult direction, sitting at desks, listening to and reading about things that don’t interest them, and answering questions that are not their own and are not, to them, real questions. We leave them ever less time and freedom to play, explore, and pursue their own interests.83

Incorporating Play into Preschool Curricula. It was not long ago that preschool and kindergarten children spent most of their schooldays developing social and intellectual skills mainly by playing with other children, with the rigors of learning to read or to calculate being left to first grade. This is still true in many parts of the world, but many preschools, particularly in the United States, emphasize direct instruction, more typical of techniques used in elementary school, rather than more developmentally appropriate approaches that take children’s “natural” propensities for play and activity into consideration. Do children perform any better (or worse) in developmentally appropriate programs than in direct-instructional programs? Research contrasting these two types of programs has yielded mixed results: Concerning academic abilities following a year in these programs, some studies report better performance for children attending direct-instruction programs, some for developmentally appropriate programs, and others find no differences. When looking at longer-term effects (that is, greater than one year), a majority of studies find more benefits for developmentally appropriate programs. Results are more consistent when motivational and psychosocial factors are considered, with most studies reporting that children who attend developmentally appropriate programs experience less stress, like school better, are more creative, and have less test anxiety than children attending direct-instructional programs.84 Although most of the differences reported in these studies are small in magnitude, the research indicates that there are no long-term benefits of academically oriented preschool programs for middle-class children and provides some evidence that such programs might actually be detrimental.
This led educational researcher Marion Hyson and her colleagues to conclude that there seems to be no defensible reason for encouraging formal academic instruction during the preschool years. Hyson and her colleagues write: “it may be developmentally prudent to let children explore the world at their own pace rather than to impose our adult timetables and anxieties on them.”85

All but the most unstructured preschool programs will involve some direct instruction, or teaching, of course. Whether children learn more effectively through teaching or play will depend on what is being learned. Developmental psychologist Elizabeth Bonawitz and her colleagues proposed that teaching is beneficial when children need to learn specific skills or information, but, on the downside, it also limits the range of hypotheses that children are able to consider. Bonawitz and colleagues contrasted direct instruction with discovery learning, or learning through play. The researchers either taught children a specific set of behaviors with a novel apparatus (pushing a button to make a squeaking sound) or just let children explore the apparatus freely without giving them any specific instructions on what the apparatus does. Children taught how to make the apparatus make a sound spent more time with the squeaker but spent less time overall playing with the apparatus and discovered fewer of the other things that the apparatus could do than children given no specific instructions. Thus, discovery learning may facilitate children’s learning of new properties of items or events, although it may slow the learning of specific skills or information. Young children’s social-learning abilities permit them to learn new skills rapidly through direct teaching, but direct teaching tends to limit exploration and the discovery of novel properties of artifacts. As Bonawitz and her colleagues write, “the decision about how to balance direct instruction and discovery learning largely depends on the lesson to be learned.”86

One technique that takes advantage of young children’s evolved social-learning skills and playful approach to life is guided play, which is a compromise, of sorts, between free play and direct instruction. Guided play combines the autonomy and discovery of free play with the adult guidance of direct instruction. Developmental psychologist Deena Skolnick Weisberg and her colleagues define guided play as “learning experiences that combine the child-directed nature of free play with the focus on learning outcomes and adult mentorship.”87

One form of guided play involves adults constructing situations to emphasize specific learning goals, making sure children have the freedom to explore freely within that setting. Several studies have set up exhibits in science museums, constructed to increase the chance that children will discover certain facts or principles while playing with the materials (that is, discovery learning), and have reported that children, indeed, learn from these exhibits and can transfer their newly acquired knowledge to new learning situations.88

A second form of guided play involves adults watching children play and making comments or encouraging them to learn more in the setting. For example, in one study, 4- and 5-year-old children were shown an array of geometric shapes (triangles, rectangles, pentagons, and hexagons) in one of three conditions: Free-Play, in which children could play with the materials in any way they wished; Didactic Instruction, in which an adult described each of the shapes and explored the shapes (discovering the shapes’ “secret,” for instance, that triangles have three sides) while the child watched; and Guided-Play, in which an adult described the shapes in the same way as in the Didactic Instruction condition but encouraged the child to explore and discover the shapes’ secret. When later asked to sort shapes (for example, sort the triangles together), children in the Guided-Play condition performed best, and children in the Free-Play condition performed worst, with the performance of children in the Didactic-Instruction condition falling between the two.89

Why might guided play work better than direct instruction or free play for many kinds of content? The active nature of guided play and the act of discovery are intrinsically rewarding to children and may help foster children’s love of learning and persistence on a task. The social-interactive nature of the task is also reinforcing for children, and, as we’ve seen earlier in this section and in Chapter 6, children evolved highly efficient social-learning abilities, increasing the chance that they will learn from instruction.90 Guided play also takes advantage of working within Vygotsky’s zone of proximal development, which, according to theory and research, is the level of difficulty at which most improvements in skills occur.

Most educators and theorists recognize the need for direct instruction for higher-level concepts and skills, such as mathematics,91 which would seemingly make free or guided play less useful at older ages. However, a meta-analysis of studies examining the effectiveness of different pedagogical methods in children, adolescents, and adults reported better outcomes, on average, of enhanced discovery learning (much like guided play) when compared with other forms of instruction for people of all ages.92 Similarly, the theory behind a Montessori curriculum has much in common with guided play (children are free to engage in a carefully prepared environment), and research has found that children attending Montessori programs frequently outperform children attending conventional schools in terms of academic achievement.93

Peter Gray, a proponent of educational settings that mimic hunter-gatherer conditions as much as possible, argues that a free-play orientation should be implemented at all levels of education, not just for preschoolers. If children want to learn to read or do calculus, they can do so by watching others or by asking for instruction when needed. Gray believes that each child is solely responsible for his or her own education, and as evidence he describes a successful preschool-to-high-school program that follows this philosophy (Sudbury Valley School in Framingham, Massachusetts).94 Gray’s perspective is consistent with Jean Piaget’s view of the role of teachers in modern education; Piaget wrote:

It is despite adult authority, and not because of it, that the child learns. And also it is to the extent that the intelligent teacher has known to efface him or herself, to become an equal and not a superior, to discuss and examine, rather than to agree and constrain morally, that the traditional school has been able to render service.95

Both Gray’s and Piaget’s views on education are out of the mainstream, and many scholars, although touting the benefits of discovery learning in some contexts, argue that what children need to learn today and what our ancestors needed to learn to succeed are so different that eschewing direct instruction is unwise. For example, evolutionary developmental psychologist David Geary makes a distinction between biologically primary and biologically secondary abilities. Biologically primary abilities were selected over the course of evolution, such as language and a rudimentary sense of quantities. In contrast, biologically secondary abilities are built upon primary abilities but are cultural inventions, such as reading and mathematics beyond simple addition. Many biologically secondary abilities that people must master today are vastly different from what our ancestors needed to master and increasingly discrepant from the biologically primary abilities from which they derive. To this point Geary writes,

The gist is that the cognitive and motivational complexities of the processes involved in the generation of secondary knowledge and the ever widening gap between this knowledge and folk knowledge leads me to conclude that most children will not be sufficiently motivated nor cognitively able to learn all of secondary knowledge needed for functioning in modern societies without well organized, explicit and direct teacher instruction [italics in the original].96

Although much of modern schooling represents a mismatch with the way our ancestors became educated, modern environments are also highly different from those of our forechildren, making it necessary to modify ancient forms of pedagogy. I believe that evolutionary theory tells us that children usually learn best when they are motivated to explore their surroundings and free to discover new knowledge. However, humans, especially children, have also evolved a high degree of plasticity and the ability to learn from instruction, and these abilities must also be considered when applying evolutionary ideas to education.

Visual Media for Infants and Toddlers: A Mismatch with How Young Children Learn

Infants and toddlers don’t have smartphones (at least, most don’t), but they nonetheless have extensive exposure to screens. A 2017 survey of American parents found that children 2 years of age and younger had, on average, 42 minutes of screen time per day, most of it (29 minutes) from television.97 The rest of this time (13 minutes per day) was with DVDs, tablets, or smartphones, presumably watching videos. And a 2015 study sampling 350 U.S. children 6 months to 4 years of age who visited a pediatric clinic in an urban, low-income minority community reported that 97% of the children used mobile devices, with most starting to use them before their first birthdays.98

Videos, whether presented on tablets, televisions, or smartphones, present two-dimensional representations of people and objects. Although ancient humans made two-dimensional drawings on cave walls as far back as 40,000 years ago, most of what children encountered until relatively recently was 3D in nature, not 2D. Can infants and young children process two-dimensional information? Can they learn from 2D screens? And might exposure to 2D screens be a mismatch for the way children have evolved to make sense of their surroundings?

The answer to the first question is, yes, infants and toddlers can process 2D displays, but they seem not to treat them as symbolic representations of “real” objects but, rather, as worthy entities in their own right, often attempting, for example, to pick pictures off the page of a book.99 To answer the second question, infants and toddlers can, indeed, learn from 2D representations, just not as readily as they can from 3D models. For example, 12- to 21-month-old infants who watched a televised model perform some novel actions on objects later imitated those actions significantly better than expected by chance; however, they required twice the exposure to perform as well as when they witnessed a live model perform the same activity.100 This finding is typical of much research with children 2 years of age and younger and is the answer to our third question: although infants and children can learn from pictures and videos, researchers report a video deficit, with performance based on viewing 2D representations consistently being worse than when real people or objects are involved. For instance, toddlers who watch a video of a model performing some novel behaviors remember about half as many actions as children who observe a live model.101 Developmental psychologist Rachel Barr notes that this deficit is not limited to videos but is found for other 2D displays as well, including touchscreens and picture books.102

Despite this well-documented deficit, parents wishing to enhance their infants’ cognitive development have bought educational software purporting to do just that. Perhaps not surprisingly, there is little evidence that digital media aimed at infants and toddlers has any benefit for children’s cognitive development, and several companies have been forced out of business for making false claims (Your Baby Can Read®) or have offered refunds in response to questions about the legitimacy of their educational claims (Baby Einstein®). For example, in one study, developmental psychologist Judy DeLoache and her colleagues attempted to teach 12- and 18-month-old infants new words in one of three ways: having infants watch a video without parental interaction, having them watch a video while parents were encouraged to interact with them, or direct parent teaching. Only the infants who had direct parent teaching learned words better than infants in a control condition who had not had any instruction.103 Perhaps it should not be surprising that infants do not learn much content from digital media. Developmental psychologists Mary Courage and Alissa Setliff note that although infants are often highly attentive to videos, including television, it is not until about 18 months that the content of the video, rather than the physical stimulus qualities of the display, will hold children’s attention.104

Why might infants and toddlers not benefit from exposure to video and other 2D displays? Some have suggested that early exposure to visual media impairs executive function.105 For instance, in one study 3-year-old children who had been exposed to high levels of visual media in the home at 9 months of age displayed increased irritability, distractibility, failure to delay gratification, and difficulty shifting focus from one task to another, all reflections of poor self-regulation.106 However, the negative effect of too much screen time may be related to the type of programs children watch. Several studies have found that impairments in executive function occur only for preschoolers who watch fast-paced cartoon shows,107 and that watching high-quality educational preschool programs has positive effects on academic performance.108

Humans evolved in environments with little if any two-dimensional stimuli. With the advent of painting, followed by printing, and now visual digital media, the 2D world is as “real” and as ubiquitous as everyday 3D stimuli. Human adults and older children seem to have little trouble dealing with 2D representations, but this seems not to be the case for infants and toddlers. The American Academy of Pediatrics recently reiterated its original 1999 recommendation that children under 2 years of age should have no digital-media time (excepting occasional video chats, presumably with grandparents; Figure 7.4), and that children between the ages of 2 and 5 years limit their digital-media time to one hour a day.109 The principal reason the American Academy of Pediatrics recommends limiting screen time for infants and toddlers is that the nervous systems of young children are immature and that they learn best from physical and social interaction. In other words, extensive exposure to visual media is a mismatch with young children’s evolved adaptations for learning and making sense of their world. Evidence for this contention comes from a recent study in which the amount of time 3- to 5-year-old children spent looking at screens per day was associated with the amount of white matter (myelin) in certain areas of the brain. White matter is associated with speed and efficiency of neural processing. The more time preschool children spent looking at screens, the less white matter they had in areas associated with language and emerging literacy skills.110 Although the research is far from conclusive at this point, what evidence we do have is consistent, once again, with the position that stimulation in excess of the species norm early in life can have negative consequences on subsequent development.111


Figure 7.4. The American Academy of Pediatrics recommends that children under 2 years of age have no screen time, other than occasional FaceTime.

Source: Shutterstock, with permission.

Children have been educated (some would say they’ve been educating themselves) for millennia. They have evolved remarkable social-learning abilities, which, along with substantial neural plasticity, have permitted them to master the technological and social skills to function in any culture. Much of this education occurred during play with peers. Children today have the same impressive social-learning skills as their forechildren and still learn through play, but the rapid changes in material and cognitive culture have put a strain on children’s evolved adaptations for learning. The fact that children can acquire skills such as reading and mathematics in settings that would be foreign to our ancestors attests to the substantial neural and cognitive plasticity characteristic of our species. Modern education takes advantage of children’s evolved learning abilities on the one hand, but presents a mismatch with their evolved adaptations on the other. Formal education and direct instruction seem necessary for children from WEIRD cultures to attain the skills and knowledge needed for success, and it is nearly impossible to escape the lure of digital media, even for infants and toddlers. Yet, the mismatches cause problems: some children lack the motivation to learn the modern curriculum, and the stress of formal schooling has consequences for social and emotional functioning. Being aware of the mismatches between how children have evolved to learn and how we teach them can help us develop curricula that provide children with the socially important skills they need while minimizing the problems such mismatches cause.

Dealing with Mismatches

As a species’ environment changes, adaptations that had evolved over generations in earlier, stable ecologies will sometimes be at odds with current conditions. Such mismatches could result in subsequent evolution, species extinction, or, if animals have enough plasticity, adjustments to the new contexts. Humans are the poster child for evolutionary mismatches. This is because Homo sapiens, more than any other animal, is a cultural species, and cultural changes occur at a much faster pace than biological changes. Humans’ neural, cognitive, and behavioral plasticity has permitted us to (1) create cultural change and transmit it across generations and (2) adjust our evolved adaptations to deal with cultural novelty. This doesn’t mean we make adjustments easily or without costs, but mismatches between evolved adaptations and cultural innovations have not meant extinction for us, at least not so far.

Evolutionary mismatches can have different consequences for people at different stages of the lifespan. Children are not simply incomplete or immature versions of adults. Natural selection has honed the nervous systems of infants, children, and adolescents to the particular niche in which they live and their ancestors evolved. The significance of this is that new material or cognitive artifacts can be especially disruptive for younger members of the species, and this has consequences for development into adulthood. We are not about to turn back the cultural-innovation clock. For most of human existence we lived as hunter-gatherers, and children made their way in life the same way, with pretty much the same set of artifacts, as their parents and grandparents did. With the advent of agriculture and cities and a more sedentary way of life, new lifestyles were invented, but even then, technology changed slowly, and the lives of children continued to be much like those of their parents. Technological change accelerated over the past several hundred years and is moving at supersonic speed today.

What’s a Parent to Do?

What can parents do to minimize the problems caused by evolutionary mismatches? First, simply being mindful of the increasing mismatches between modern culture and the evolved adaptations of our species’ youngest members may help us make it easier for children to use new technologies and to be happier and less stressed as they do. Beginning in infancy, parents can follow the recommendations of the American Academy of Pediatrics and limit the screen time of infants and toddlers. Given the prevalence of visual media today and young children’s fascination with video (they may not learn much from it, but they do pay attention to it), it is nearly impossible to follow the Academy’s recommendation of no exposure to visual media for children under 2 and no more than one hour a day for children 2 to 5 years of age. Although it seems unlikely that occasionally watching videos on television or on a tablet will have dire consequences for children’s cognitive or social development, excessive exposure may.

Avoiding screen time is nearly impossible once children start school, as computers or tablets are a central part of the curriculum in many classrooms. Children in WEIRD cultures need to be computer literate. But this does not mean that they need to be glued to smartphones and active on social media. Many parents (including Silicon Valley executives) have recognized the downside of having their children constantly online, and one parent group, Wait Until 8th, advocates waiting until 8th grade before children have their own smartphones.112 Rather than having children spend their time on social media, interacting with age-mates only virtually, parents can encourage them to get exercise outside, playing face to face with flesh-and-blood children. While outdoors, children can be encouraged to walk, bicycle, skateboard, hike, canoe, or kayak through natural areas, activities that have been shown to reduce stress and the risk of mood disorders.113 And once children do have smartphones and tablets, they should not be allowed to take them to bed at night or use them at family mealtimes. Dinnertime conversations with teenagers can often be difficult, but they are almost impossible when one party is staring at a smartphone. This goes for adults as well: putting phones away at the table not only sets a good model for smartphone etiquette but also enhances parent–child interactions, even with infants and toddlers.114

Children can get exercise in many ways, including adult-directed activities such as dance or martial-arts lessons or playing in organized sports such as basketball or soccer. But they would be better off if some of this time were spent in unstructured and unsupervised free play with other children. Children and teens do not need adults to make up the rules for a game, determine when a rule has been broken, or adjudicate disputes. Generations of children have made up their own rules to games, have resolved arguments when they arose, and have learned much in the process. Given the emphasis on safety in many communities, it may be difficult to find times and places for children to congregate freely. Consider establishing a “free-play” space in your backyard or basement. If something comes up that children can’t handle, adult intervention is not far away, but children need the space and freedom to exert some control over their lives. This increase in freedom and choice for children and teens should foster the development of a sense of agency, of being an actor in one’s life rather than a passive observer of it. Greater agency may involve some increased risk-taking, such as earning one’s own money or traveling alone by bus or train to visit friends or family in another city, but it need not result in an increase in unsafe behavior such as binge drinking, unsafe sex, or texting while driving. Children following a slow life history strategy can still remain cautious with an eye to the future, but success in the future also favors the bold, and children and teens can be adventurous without being foolhardy if given the chance.

What’s an Educator to Do?

Children’s natural way of learning is through play. They learn important skills and social behaviors “in context,” mostly through observation and by doing. Hunter-gatherers do not need to learn how to write or do calculus, so it should not be surprising that different pedagogical techniques are needed to learn evolutionarily novel skills, such as reading and mathematics, and few children acquire such skills spontaneously without direct instruction. But even if direct instruction is the best way to learn many of the technological skills of WEIRD cultures, this does not mean that it is how all children should be educated all the time, especially young children. Studies with monkeys by the primatologist Harry Harlow and with human infants by the developmental psychologist Hanus Papousek showed that starting a learning task too early (for Harlow’s monkeys, beginning a discrimination-learning task at 60 days of age; for Papousek’s infants, starting an operant-conditioning task at birth) actually slows learning compared with starting the task at a later age.115 In a similar vein, preschool children who are given direct instruction, similar to that used in elementary schools, rarely learn more than children encouraged to discover new facts and skills through play, and they are more stressed and like school less than children who attend developmentally appropriate preschools. Preschool educators may be pressured by parents to have children reading and knowing their math facts by the time they start kindergarten, but they risk long-term costs in exchange for questionable short-term benefits.

And discovery learning need not end with preschool. Children of all ages learn best through exploration and discovery, even when some instruction is required. When instruction is necessary, it should be compatible with children’s intuitive learning biases related to folk psychology, folk biology, and folk physics (see discussion in Chapter 1), which are well suited to the niche of early childhood.116

Other scholars contend that discovery learning should be the principal means of education for children through high school. For example, in one retrospective study, evolutionary psychologists Kathryne Gruskin and Glenn Geher117 interviewed 361 college students about aspects of their elementary school experiences that were consistent with evolutionary theory (for example, academic, playful, and collaborative interactions with different-age peers; free play; hands-on learning; explicit real-world applications for learning) as well as those that were not (for example, teacher lecturing; learning from textbooks and workbooks; assessment based on testing) and related these experiences to subsequent academic performance (for example, grade point average in middle and high school; enjoyment of middle and high school). They reported statistically significant positive correlations (ranging from .20 to .37) between evolutionarily relevant elementary education practices and subsequent middle- and high-school grade point averages and enjoyment of middle and high school, supporting the contention that evolutionarily relevant early education may lead to subsequent success in secondary education.

Peter Gray, a staunch advocate of basing education at all ages on hunter-gatherer lifestyles, describes the success of students from 4 years old through high-school age at the Sudbury Valley School, which is modeled after hunter-gatherer childhoods. Like hunter-gatherer children, Sudbury Valley School students are solely responsible for their own education. Children are provided with a mixed-age, supportive, and opportunity-rich environment that they can explore as they wish, with adult staff members (they are not called teachers) available to children who request assistance. These practices reflect the way children have learned for thousands of years, consistent with the evolved neurocognitive architecture that permits such incredible amounts of learning during childhood.118

Another program, the Regents Academy, used cooperative small groups, also characteristic of ancient human environments, as the basis for a high-school intervention program for academically at-risk 9th and 10th graders in Binghamton, New York.119 The program encouraged group cohesion through consensus decision-making and appropriate individual and group autonomy and accountability. Teenagers who participated in this program performed better than matched control children in regular school, were no different academically from not-at-risk children enrolled in the same school, and scored just as well as the average high-school student on state-mandated exams of algebra and English.

Bullying is a common problem in WEIRD schools across nations. Although bullying is also found in traditional and hunter-gatherer cultures,120 the structure of modern schools represents a substantial mismatch with traditional environments, making bullying especially common: approximately half of U.S. adolescents reported being involved as a victim or perpetrator of bullying within a two-month period.121 Antibullying programs have had mixed success, with most programs involving zero tolerance or empathy training having little or no effect, especially for adolescents. One possible reason for the failure of many of these programs is that they fail to recognize that bullying provides benefits for the bully in terms of reputation and status, particularly for the socially conscious adolescent. An evolutionary perspective views bullying as based on evolved mechanisms in which some individuals are motivated to engage in aggressive goal-directed behavior when the benefits outweigh the costs.122 Consistent with evolutionary theorizing, programs that increase the costs or reduce the benefits of bullying are more apt to report reductions in bullying.123 One pilot program with 7th- and 8th-grade students in the United States that was explicitly designed following evolutionary principles is the Meaningful Roles Intervention, in which bullies are given high-visibility, meaningful roles and responsibilities as part of a school jobs program. In this program, bullies were initially paired with highly competent and socially accepted students and given high-status jobs to perform, such as door greeter, parliamentarian (who looks up and interprets the rules), or photographer. Students wrote “praise” notes recognizing their peers’ prosocial behavior. Over the course of a year, the incidence of fighting, injuries and illness, absences, and detentions all decreased significantly in the school compared with the previous year. Bullies were now gaining status and praise through prosocial behavior, which was associated with a reduction in aggressive behavior.124

Mismatched Youth

Evolutionary mismatches are abundant for modern humans. According to physician and nutritionist Brandon Hidaka, “In effect, humans have dragged a body with a long hominid history into an overfed, malnourished, sedentary, sunlight-deficient, sleep-deprived, competitive, inequitable, and socially-isolating environment with dire consequences.”125 Not all the stress, mood disorders, and loneliness associated with modern culture can be attributed to evolutionary mismatches, but many can, and some are associated with specific periods of development. Children and adolescents evolved age-appropriate adaptations that did a good-enough job of getting them through their particular time in life and preparing them for adulthood. The rate of cultural change has outstripped the rate of biological change since Homo sapiens became a sedentary species about 10,000 years ago, and both childhood and adulthood have changed substantially in the ensuing millennia, most would say for the overall benefit of the species. Neural and behavioral plasticity is greatest early in life, and children and adolescents have been able to adjust to the many mismatches with their evolved adaptations. There have been costs associated with these mismatches, however, with children and adolescents increasingly suffering from mood disorders and, recently, being unprepared for adulthood. As parents, educators, and policy-makers, we can recognize the problems associated with evolutionary mismatches in development and perhaps make the lives of children, and of the adults they will become, a bit happier.


(1.) Skenazy, L. (2008, April 8). “Why I Let My 9-Year-Old Ride the Subway Alone.” The New York Sun. Retrieved August 13, 2019. https://www.nysun.com/news/why-i-let-my-9-year-old-ride-subway-alone

(3.) Schön, 2007

(5.) Bogin, 2006; see Chapter 4

(6.) Roser, M. Fertility rate, Our World in Data. Retrieved August 16, 2019. https://ourworldindata.org/fertility-rate

(7.) Christakis, E. (2016, October 28). My Halloween email led to a campus firestorm—and a troubling lesson about self-censorship. The Washington Post. Retrieved August 16, 2019. https://www.washingtonpost.com/opinions/my-halloween-email-led-to-a-campus-firestorm--and-a-troubling-lesson-about-self-censorship/2016/10/28/70e55732-9b97-11e6-a0ed-ab0774c1eaa5_story.html

(9.) Twenge, 2017, p. 4

(10.) Twenge, 2017, p. 47

(11.) Twenge, 2017, p. 45

(14.) See, for example, chapters in Rubin, Bukowski, & Laursen, 2018

(22.) Geidd, in Sercombe, 2014, p. 62

(24.) Taylor & Silver, 2019; Anderson & Jiang, 2018

(25.) Twenge, 2017, Chapter 2

(27.) Tinbergen, 1951; Tinbergen & Perdeck, 1950

(28.) Sbarra, Biskin, & Slatcher, 2019

(36.) See meta-analysis by Bowler et al., 2010, and brief review by Schertz & Berman, 2019; see also Harper, 2017, for a discussion of Western parents’ aversion to children’s risky play.

(39.) Data from Twenge & Campbell, 2018; figure from Twenge, 2019

(45.) Heffer et al., 2019

(53.) From Jones, 1986. See also Baumeister & Leary, 1995.

(54.) Kessler, Üstün, & World Health Organization, 2008; Weissman et al., 1996

(56.) Twenge, 2017, p. 176

(61.) Lancy & Grove, 2010, pp. 164–165

(64.) Wood, Bruner, & Ross, 1976

(69.) Gray, 2013, 2016

(75.) Vygotsky, 1933, p. 102, as cited in Elias & Berk, 2002, p. 219

(82.) Berk & Myers, 2013

(83.) Gray, 2013, p. 5

(84.) See Bjorklund, 2007b, for a discussion of the effects of direct-instruction preschools versus developmentally appropriate preschools on children’s social, emotional, and cognitive development.

(94.) Gray, 2013, 2016

(95.) Piaget, 1977, cited in Rogoff, 1998, p. 38

(96.) Geary, 2007, p. 43, italics in the original

(125.) Hidaka, 2012, p. 211