Book Review: Not Born Yesterday by Hugo Mercier

There are two core arguments put forward in Not Born Yesterday, the latest book by cognitive scientist Hugo Mercier. The first argument, one which I believe has a great deal of truth to it, is hinted at by the title—the gullibility of the average person has been greatly exaggerated. Human beings are discriminating creatures, capable of evaluating contradictory messages from self-interested interlocutors and protecting themselves from the deceptions of others. That seems right to me.

The second argument, far less persuasive in my view, is that this epistemic vigilance makes deception from others relatively rare and unsuccessful for persuasive purposes. Here’s a nice quote from the book which summarizes Mercier’s perspective,

We aren’t gullible: by default we veer on the side of being resistant to new ideas. In the absence of the right cues, we reject messages that don’t fit with our preconceived views or preexisting plans. To persuade us otherwise takes long-established, carefully maintained trust, clearly demonstrated expertise, and sound arguments.

I think the first sentence is broadly right, but as I will explain, the next two sentences have a number of assumptions baked into them which lead Mercier to underestimate how effective self-interested manipulation and deception can be. Mercier says, “The multiple mass persuasion attempts witnessed since the dawn of history—from demagogues to advertisers—are no proof of human gullibility. On the contrary, the recurrent failures of these attempts attest to the difficulties of influencing people en masse.”

According to Mercier, national propaganda might signal the power of the state, for example, but it is unlikely to actually convince people of its message. However, notice the problem of scale here—we did not evolve in contexts of frequent messaging from competing strangers through technologically advanced mass media. Whether or not various mass persuasion endeavors were effective at persuading the average citizen in a modern state society seems to me to have little bearing on the question of the evolution of our persuasive or deceptive abilities. It does, however, offer some support for the argument about people not necessarily being easily swayed or intrinsically gullible by just any communicative signal put to them.

The arguments about gullibility in the book tend to be more carefully considered than the ones about the lack of deception, and Mercier sometimes finds himself having to make rhetorical sacrifices to the latter argument in order to stay on firm ground with the former. For example, Mercier writes that,

Even when mass persuasion bears on the message itself, it can reach its goal without entailing any gullibility on the part of the audience. Many of the products we buy exist in virtually identical versions—different brands of soda, toothpaste, detergent, cigarettes, milk. In these conditions, it is only normal that our minds should respond to minor nudges: the position on the shelf, a tiny discount, or even an appealing ad. Shifts between these essentially identical products may be of no import for consumers, while making a huge difference for companies. Some ads can be cost-effective without any genuine persuasion having taken place.

I think Mercier is right to note that just because people respond to minor social cues, that does not make them gullible; it can be perfectly in their interests, or at least non-costly, to do so. But this fact also clearly creates opportunities for manipulation and deception in the space of “virtually identical” products, with Mercier forthrightly saying it is “only normal that our minds should respond to minor nudges: the position on the shelf, a tiny discount, or even an appealing ad” in such contexts.

Mercier writes that,

Deceit is cognitively demanding: we have to think of a story, stick to it, keep it internally coherent, and make it consistent with what our interlocutor knows. By contrast, negligence is easy. Negligence is the default. Even if we are equipped with cognitive mechanisms that help us adjust our communication to what others are likely to find relevant, making sure that what we say includes the information our interlocutor wants or needs to hear is difficult.

I don’t think this is necessarily wrong, although distinguishing intentional deceit from negligence can be tricky. Someone can provide you with self-serving information knowing it is false, or provide you with self-serving information while simply not caring whether it is false, and there is surely a whole spectrum in between, with different deceivers exhibiting varying degrees of awareness of how accurate the information they communicate may be. The example Mercier gives, instead of clarifying things, compounds the difficulty of differentiating intentional deception from negligence. He writes that,

But deceit is not the only, or even the main, danger of communication. Imagine that you are looking to buy a used car. The salesman might lie to you outright: “I have another buyer very keen on this car!” But he is also likely to give you misguided advice: “This car would be great for you!” He might believe his advice to be sound, and yet it is more likely to be driven by a desire to close the sale than by a deep knowledge of what type of car would best suit your needs. Now you ask him: “Do you know if this car has ever been in an accident?” and he answers “No.” If he knows the car has been in an accident, this is bad. But if he has made no effort to find out, even though the dealership bought the car at a suspiciously low price, he is culpable of negligence, and this isn’t much better. In the case at hand, whether he actually knew the car had been in a crash, or should have known it likely had been, makes little difference to you. In both cases you end up with a misleading statement and a lemon.

Note that in this example there are still explicit advantages to knowing what you don’t know. The negligent car salesman who happens to not know the history of some of his cars is still at a competitive disadvantage with the car salesman who knows what he doesn’t know and how to make strategic use of that fact, adjusting his behavior accordingly. So instead of clearly demonstrating negligence is a more important factor than deception, this example makes it seem to me more of a spectrum of variation than necessarily competing explanations for social manipulation, where asymmetries of knowledge are taken advantage of by self-interested parties who use various strategies, including negligence and intentional deception, to do so.

Other examples in the book also indicate that deception has been historically more important and prevalent than Mercier suggests. He begins the book with an anecdote about being swindled by a con man pretending to be a doctor,

As I was walking back from university one day, a respectable-looking middle-aged man accosted me. He spun a good story: he was a doctor working in the local hospital, he had to rush to some urgent doctorly thing, but he’d lost his wallet, and he had no money for a cab ride. He was in dire need of twenty euros. He gave me his business card, told me I could call the number and his secretary would wire the money back to me shortly. After some more cajoling I gave him twenty euros. There was no doctor of this name, and no secretary at the end of the line. How stupid was I? And how ironic that, twenty years later, I would be writing a book arguing that people aren’t gullible.

Later on Mercier offers an explanation for this event, in conjunction with his discussion of the first person identified to be a con man. Mercier writes that,

The first man to be called a con man was Samuel Thompson, who operated around 1850 in New York and Philadelphia. He would come up to people, pretend to be an old acquaintance, and remark on how people did not trust each other anymore. Making his point, he would wager that the mark wouldn’t trust Thompson with their watch. To prove him wrong, and to avoid offending someone who appeared to be a forgotten acquaintance, some people would give Thompson their watch, never to see him or their watch again. Thompson relied on his “genteel appearance” (a coarse cue indeed) to pressure his victims: they might not have trusted him altogether, but they feared a scene if they blatantly distrusted someone of their own social standing. This is how the fake doctor from the introduction got me to give him twenty euros. Once you accept the premise that someone is who they say they are, a number of actions follow logically: had that person been a real doctor, I should have been able to trust him with the money. And rejecting the premise, saying to someone’s face that we think they are a fraud, is socially awkward.

In my view this paragraph actually helps us understand why manipulation and deception are common, not rare. The explanation is embedded in the assumptions Mercier conveys here about doctors, and in the description of why Thompson’s and the fake doctor’s deceptions worked: people are often reluctant to question or challenge widely agreed-upon social norms and conventions, understandably fearing a loss of status and reputation, and in every society there are individuals willing and able to take advantage of this fact to gain benefits for themselves. This is a topic I have discussed at length on this website.

One thing evolutionary accounts of human history often miss is the commonality of sorcerers and secret societies throughout our history, attesting to the importance of social manipulation and how self-interested parties take advantage of asymmetries of information. Historian Daniel Jütte writes that, “In contemporary Western societies, frequently invoked categories like ‘the public’ and ‘openness’ are critical for shaping our ideals about how people should live together and how knowledge should be produced. As a result, we greatly underestimate the importance of secret forms of knowledge both in the premodern world and in contemporary non-Western societies.”

As I discussed previously, Mande blacksmith clans in West Africa offer one salient example of the importance of secret forms of knowledge in smaller-scale societies. Anthropologist Patrick McNaughton writes that,

Nearly everyone believes that members of these special clans possess a mysterious spiritual power that underpins occult practices and makes the people possessing it potentially dangerous. These powers go well beyond the practice of the clan's special trade, but they are also considered essential to anyone who takes it up. Often members of these clans go to great lengths to nourish a belief in their power among the rest of the population. Indeed, they generally believe in it themselves. Furthermore, they say they are born with much of this power. It is part of their heritage and one of the things that makes them so different from everyone else. That too creates a profound handicap for any outsider who might want to learn a clan's special trade.

I tend to think humans’ ‘epistemic vigilance’, far from disincentivizing social manipulation and deception, has contributed to the evolution of more elaborate and extravagant forms of them. In his paper on the cultural evolution of shamanism, anthropologist Manvir Singh writes,

I propose that shamanism is a suite of practices developed through cultural evolution to convince observers that an individual can influence otherwise uncontrollable outcomes. In particular, the shaman is an individual who violates intuitions of humanness to convince group members that he or she can interact with the invisible forces who control unpredictable, important events.

Singh, like Mercier, disagrees with simple narratives that focus narrowly on viewers’ gullibility to charlatans, but Singh’s framework demonstrates how competition among shamans, effectively the ‘doctors’ of many small-scale societies, leads to more elaborate performances to persuade viewers of supernatural power.

“When outcomes of uncertainty are controlled by invisible forces, cultural selection will favor individuals who claim special abilities of interacting with those forces.” Figure from ‘The cultural evolution of shamanism’ (2017) by Manvir Singh.

Now, let us revisit Mercier’s conclusion: “In the absence of the right cues, we reject messages that don’t fit with our preconceived views or preexisting plans. To persuade us otherwise takes long-established, carefully maintained trust, clearly demonstrated expertise, and sound arguments.”

Another way of putting it is that people are credulous toward views that do fit their “preconceived views or preexisting plans,” and this leaves room for deceivers to take advantage of such biases and incentives. Further, if individuals provide some signal that seems to indicate impressive power, knowledge, or status, people do exhibit an increased willingness to believe them.

I think Mercier is right that people are not intrinsically gullible and have a variety of cognitive tools at their disposal for evaluating trustworthiness. But the conditions under which people develop and perpetuate costly beliefs that may cause them harm, or beliefs that benefit some people at the expense of others and are promoted through self-interested deception, are much more common than the book conveys.