Abstract and Keywords
Against the background of rational choice theory, this chapter provides an overview of the behavioral sub-disciplines informing behavioral law and economics—including judgment and decision-making studies, parts of social psychology, moral psychology, experimental game theory, and behavioral ethics. The chapter discusses deviations from cognitive and motivational rationality, including studies of people’s moral judgments. It begins with probability assessments and related issues. It critically describes phenomena related to prospect theory, phenomena associated with motivated reasoning and egocentrism, and those related to reference-dependence. It also summarizes studies of bounded willpower. Some attention is given to studies that show that most people do not share the consequentialist outlook that prioritizes the maximization of human welfare over all other values. Finally, the chapter discusses several issues that cut across various phenomena: individual differences in judgment and decision-making; the significance of professional training, experience, and expertise; deciding for others; group decision-making; cultural differences; and debiasing.
Keywords: heuristics and biases, dual-system theories, availability, hindsight bias, prospect theory, loss aversion, framing, omission bias, motivated reasoning, overoptimism, behavioral ethics, pro-social behavior, individual differences, group decision-making, debiasing
A. An Overview
1. History, Methodology, and Interdisciplinary Impact
While it has antecedent intellectual roots, psychological research on human judgment and decision-making (JDM) has mainly evolved since the 1950s—in part in response to the expected utility theory put forward by John von Neumann and Oskar Morgenstern.1 While economists have tended to view expected utility as both a normative and a descriptive model of human preferences and choices (and many of them still do), psychologists from early on have focused on experimentally questioning the descriptive validity of expected utility theory.2
As we shall see below, both the reference to expected utility theory as a normative benchmark (implying that deviations from it are “biases”), and the extensive resort to laboratory experiments, have been the subject of some controversy. Nevertheless, the bulk of JDM studies, including those that have had a particular impact on legal research and policymaking, share these characteristics. Thousands of studies have identified a long list of biases in people’s performance of various tasks—such as the availability heuristic, self-serving biases in recalling information, and bounded willpower.
Throughout the years, JDM studies have gradually expanded in scope and methodology, blurring the borders between them and other spheres of psychology, including studies of emotions, learning, and memory. In particular, there is considerable overlap between JDM and social psychology, the study of the influence of other people’s (actual and imagined) presence on people’s thoughts, feelings, and behavior.3 Notable progress in JDM studies is also due to methodological innovations. In addition to laboratory experiments—which are still the most prevalent methodology—JDM studies employ field experiments and analyze the results of natural experiments that shed light on human judgments and choices. A rapidly growing body of neuropsychological studies based on functional magnetic resonance imaging (fMRI) and similar techniques is opening up new frontiers in understanding the neural underpinnings of cognitive processes.4
Finally, in addition to the links and overlap between JDM and other spheres of psychological research, there is an ongoing dialogue between JDM research and other disciplines dealing with human behavior, such as economics,5 finance,6 political science,7 and law.8 These dialogues have been extended following the introduction of experimental methodologies into economic, legal, and even philosophical studies.9 The important contribution of psychological studies to economics was recognized in 2002, when Daniel Kahneman won the Nobel Prize in economics. The powerful impact of those studies on legal analysis is reflected throughout this book.
As previously noted, the borderlines between JDM and other spheres of psychological research, and between the psychological and other perspectives on human decision-making, are blurred. However, as our focus is not on JDM and the law, but rather on behavioral law and economics, drawing these borderlines is not important for our purposes. Instead, we shall discuss the main findings that are relevant to behavioral law and economics, regardless of whether they belong to JDM stricto sensu. Thus, in addition to deviations from the assumptions of cognitive, or “thin” economic rationality—that is, the formal elements pertaining to the structure of people’s preferences (such as transitivity) and people’s strategy of decision-making—we are interested in systematic deviations from motivational, or “thick” rationality, namely the assumption that people only seek to maximize their own utility.10 Numerous studies, by psychologists and experimental economists alike, have shown that maximizing one’s utility is not the only motivation that drives people: people also care about the welfare of other people, act out of envy or altruism, and show commitment to values of reciprocity and fairness.11 More recently, much attention has been given to people’s moral judgments, as well as to automatic psychological processes that lead ordinary people to violate moral and social norms.12
Before turning to specific psychological phenomena, this overview discusses a few general themes, including dual-process theories, theories of heuristics and biases, and the challenges posed to JDM by the approach known as Fast-and-Frugal Heuristics.
2. Dual-Process Theories
Dual-process theories posit that there is more than one way in which people perceive information, process it, and make decisions.13 Originally coined by Keith Stanovich and Richard West, and subsequently adopted by Kahneman, the terms System 1 and System 2 have gained great popularity in describing human judgment and decision-making.14
System 1 operates automatically and quickly, with little or no effort, and with no sense of voluntary control. It is commonly described as spontaneous, intuitive, associative, context-dependent, and holistic. It uses mental shortcuts, or heuristics, that people learn through personal experience, or that may even be innate. In contrast, System 2 involves effortful mental activity. It is conscious, deliberative, and analytic—and thus also slow and exacting. It employs rules that are explicitly learned. System 1 thinking is used in most of our daily tasks—such as identifying familiar faces and recognizing other people’s strong emotional reactions, driving a car (when the road is relatively empty), understanding simple sentences, and answering trivial math questions. Examples of tasks involving System 2 thinking include answering complex math questions, finding an address in an unfamiliar neighborhood, and writing this paragraph.
Some accounts of dual-process thinking link it to the role played by emotions in decision-making, maintaining that emotional reactions influence decision-making through System 1.15 This claim is part of a large body of research about the impact of emotions on judgment (including moral judgment) and decision-making.16 There is also some evidence that separate regions of the brain are involved in the different types of cognitive processes.17 Relatedly, there are hypotheses about the evolution of the two systems in humans and animals—essentially, that System 2 is uniquely, or characteristically, human.18
The use of System 1 heuristics and shortcuts is inevitable, given the endless stimuli that we are constantly exposed to, and the huge number of decisions we make every day. System 1 is usually very effective. However, it also results in systematic and predictable deviations from the axioms of rational decision-making, which are known as cognitive biases.
In general, the speedy and autonomous nature of System 1 processes makes it dominant a priori—that is, it controls behavior by default, unless analytical reasoning intervenes.19 While System 2 may intervene when System 1 leads to suboptimal results, people usually stick to their intuitive System 1 choices, and use System 2 chiefly to provide justifications for those choices.20 In this respect, Stanovich has proposed distinguishing between the reflective and the algorithmic components of System 2. The so-called reflective mind determines whether System 1 is interrupted and suppressed by System 2; and when it is, the algorithmic mind processes the information and makes the deliberative and analytic judgment or decision. Unless the higher-level, reflective mind intervenes and the algorithmic mind comes up with a more rational, accurate, and consistent judgment/decision than the one provided by System 1, the cognitive biases of System 1 prevail.21 Obviously, these constructs are simplified accounts of what may well be much more complex processes in reality.22
People vary in their disposition to use an analytic, rather than intuitive, mode of thinking. One test that is often employed to measure people’s disposition in this regard is the cognitive reflection test (CRT). The CRT includes questions such as: “A bat and a ball cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?” The first answer that comes to mind is 10 cents, but a moment’s reflection shows that the right answer is 5 cents. People who give the former answer display a lesser disposition for cognitive reflection than those who give the latter.23 Originally comprising only three questions, the CRT has subsequently been expanded by adding more questions.24 Another test of the tendency to engage in effortful cognitive endeavors is the need for cognition scale (NCS), which comprises a relatively long list of self-characterization statements, such as: “I find satisfaction in deliberating hard and for long hours.” Subjects indicate the extent to which each statement characterizes them, and are assessed based on the aggregation of their replies.25
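The bat-and-ball arithmetic can be checked in a few lines. The following is a minimal sketch (the variable names are ours, and the amounts are expressed in cents to avoid floating-point rounding):

```python
# The problem imposes two constraints (in cents):
#   bat + ball = 110
#   bat - ball = 100
total = 110
diff = 100

# Solving the pair of linear equations gives ball = (total - diff) / 2.
ball = (total - diff) // 2
bat = ball + diff

assert bat + ball == total
print(ball)  # 5 -- the correct answer is 5 cents, not the intuitive 10
```

If the ball cost the intuitive 10 cents, the bat would have to cost $1.10 and the pair $1.20, violating the first constraint.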
Whether a person uses one system or the other in any particular context is not only a matter of personal disposition, but a function of training and experience as well. Tasks that initially require conscious effort—such as speaking a foreign language or driving a car—may become automatic and effortless over time. The use of one system or the other also depends on the cognitive resources available to the decision-maker at the time of making the decision. Thus, when people make extensive use of their controlled deliberation resources, the resulting resource depletion may result in more intuitive, System 1 decision-making in a subsequent, unrelated task.26
Notwithstanding the shortcomings of System 1’s heuristics, it should be noted that sometimes System 1 produces more accurate decisions than System 2, and people often consciously use simple heuristics (rather than exacting, analytical thought-processes) in their judgments and decision-making.27 The bright and dark sides of heuristics are further discussed below.28
3. Theories of Heuristics and Biases
Several theories have been proposed to explain how the heuristics used by System 1 operate, and why they result in systematic errors. One such model is attribute substitution. It posits that many heuristics “share a common process . . . in which difficult judgments are made by substituting conceptually or semantically related assessments that are simpler and more readily accessible.”29 For example, when a person is asked which of two events is more probable, she might substitute it with the simpler question: “Instances of which event come more readily to mind?” (the availability heuristic).30 Similarly, when asked about the probability that something belongs to a certain category, one might substitute for the question a simpler one: “How similar is it to a typical member of that category?” (the representativeness heuristic).31 Thus, in one study students were asked how happy they were with their lives in general, and how many dates they had had in the previous month. When the two questions were asked in that order, there was almost no correlation between the answers; but when the dating question was asked first, there was a high correlation between the two—presumably because the answer to the dating question became the heuristic attribute in answering the global happiness question.32
According to a more elaborate model, attribute substitution is but one of five effort-reduction mechanisms—the other four being (1) examining fewer cues, (2) simplifying the weighting principles for cues, (3) integrating less information, and (4) examining fewer alternatives.33 According to this model, the five mechanisms can be used separately, or in combination with each other.
Another mechanism (overlapping some of the aforementioned ones) that may account for certain heuristics is the isolation effect—namely ignoring anything that is not within one’s immediate field of consciousness. The isolation effect may explain, for example, why people who are normally risk-averse continue to gamble with money they have just won, or why thrifty people spend lottery gains on luxury items. In both cases, they are isolating the present decision from the overall picture.34 Relatedly, Kahneman proposed the acronym WYSIATI (for What You See Is All There Is) to describe System 1’s tendency to jump to conclusions based on the immediately available information, while neglecting all other information.35
The last explanation to be mentioned here is overgeneralization. People may follow useful judgment- and choice-rules even when the rationale of those rules does not apply, or no longer applies—thus making systematic mistakes. For example, an overgeneralization of the useful heuristic Do not waste may lead to escalation of commitment—that is, the inability to disregard sunk costs and to make decisions regarding the investment of additional resources based only on the future costs and benefits of that investment.36 Similarly, people who have learned to trust the power of their own and other people’s vision more than their or other people’s power of deduction tend to give more weight to direct evidence, such as eyewitness testimony, than to inferences from circumstantial evidence. They may do so even when the circumstantial evidence is as conclusive as direct evidence, or more so.37
4. Cognitive Biases versus Fast-and-Frugal Heuristics
Much of JDM research, particularly the early studies that had a strong impact on economics and legal theory, has developed through the documentation of specific errors of judgment in laboratory experiments, taking economic rationality as the normative benchmark—and only then looking for their causes, if at all. Experimentally studying a system’s failures can provide valuable insight about its successful functioning.38 However, several important critiques have been leveled against this type of research, notably by Gerd Gigerenzer and his colleagues, who advocate the fast-and-frugal approach as an alternative to the heuristics-and-biases research program, associated with Daniel Kahneman and Amos Tversky.39
One critique is that the heuristics-and-biases program is lacking in ecological validity, due to the differences between laboratory experiments and real-life decision-making. Specifically, it has been argued that at least some of the laboratory experiments highlighting people’s cognitive biases involve abstract tasks that are quite different from the tasks that people face in their daily life. For example, in one of the famous demonstrations of the confirmation bias,40 participants were presented with four cards, two with letters and two with numbers. They were told that each card had a letter on one side and a number on the other, and were asked to indicate which cards they would need to turn over in order to find out whether the following rule is true: if a card has a vowel on one side, then it has an even number on the other side. Most participants incorrectly suggested turning over cards that would confirm the rule, when in fact the correct answer is to choose the combination that could potentially falsify it. However, when the same task was presented in a real-life, social context, rather than in abstract terms, participants were found to do much better.41
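The logic of the card task can be sketched in a few lines of code. The visible faces below are illustrative stand-ins rather than the exact symbols used in the original study; the point is that only a visible vowel or a visible odd number can conceal a counterexample to the rule:

```python
# Hypothetical card faces for the Wason selection task.
cards = ['E', 'K', '4', '7']

def is_vowel(face):
    return face in 'AEIOU'

def is_odd_digit(face):
    return face.isdigit() and int(face) % 2 == 1

# The rule "vowel -> even number" can be falsified only by a card whose
# hidden side might reveal a counterexample: a visible vowel could hide
# an odd number, and a visible odd number could hide a vowel. Even-number
# and consonant cards can never falsify the rule, only (at best) confirm it.
must_turn = [c for c in cards if is_vowel(c) or is_odd_digit(c)]
print(must_turn)  # ['E', '7'] -- not the confirming pair ('E', '4')
```

The typical (incorrect) choice is the vowel card together with the even-number card, which can only confirm, never refute, the rule.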
Another critique of the heuristics-and-biases school is that it tends to paint a bleak picture of human decision-making being systematically irrational and fallible (and then, possibly, look for debiasing techniques), when in fact people do remarkably well under most ordinary circumstances. Unlike the well-defined conditions of laboratory experiments, in real life people have partial information about an uncertain world. Rather than maximizing their utility according to rational choice theory, what they need—and actually use—are effective and frugal decision algorithms that function well under real-life circumstances.42 In fact, simple heuristics, based on limited information, may do better than complex decision rules that incorporate more information. For example, it was found that German students who were asked to judge which of two U.S. cities was larger, and who used the heuristic that the more familiar city is larger, did better than U.S. students who had more information about those cities (and vice versa).43 Such heuristics may be consciously adopted.44
Finally, researchers of the fast-and-frugal-heuristics school also tend to emphasize the adaptive advantages of heuristics that developed when humans were mostly hunter-gatherers, and to criticize heuristics-and-biases research for producing a long list of biases with little understanding of the underlying psychological processes.45
Delving into these controversies is beyond the scope of the present discussion. While some of the critiques of the heuristics-and-biases school are well taken, others seem less and less compelling as heuristics-and-biases scholars formulate broader theories of the cognitive processes underlying biases (some of which were described in the previous subsection), and pay more heed to the ecological validity of their findings. Generally speaking, the differences between the two schools appear to be much smaller than scholars of the fast-and-frugal school tend to portray. As Ulrike Hahn and Adam Harris have succinctly pointed out, whether one should emphasize the “adaptive rationality” of using effective heuristics, or the predictable and systematic errors produced by those heuristics, is like asking whether a glass is half empty or half full.46 Moreover, some of the issues, such as that of the evolutionary roots of heuristics, are hardly resolvable by scientific means, and at any rate do not necessarily bear upon the use of behavioral insights by jurists and legal policymakers (which is the focus of this book). In addition, legal policymakers understandably tend to focus on people’s judgment and decision-making, rather than on their underlying psychological processes.
One of the critiques commonly leveled against behavioral studies is that they produce a long list of heuristics and biases, rather than a simple, unifying model of judgment and decision-making of the sort provided by rational choice theory.47 Inasmuch as this is due to the focus on deviations from economic rationality, especially in earlier JDM studies,48 perhaps doing away with this benchmark—as advocated by the fast-and-frugal school—would bring about greater clarity and coherence.49 Ultimately, however, there is an inevitable trade-off between descriptive validity and simplicity. Human psychology is too complex to be captured by a simple theory. As Kahneman has put it, “life is more complex for behavioral economists than for true believers in human rationality.”50 As long as one does not treat the behavioral outlook as a substitute for economic and other perspectives on legal and policy issues, but rather as a complementary and corrective measure, the absence of a unifying theory and our limited understanding of the underlying psychological processes of decision-making are less of an issue.51
Such modesty notwithstanding, some classification of heuristics and biases is essential, if only for expositional purposes. A notable proposal of such a classification has been put forward by Jonathan Baron.52 Baron classifies the myriad heuristics and biases into three major categories. The first category consists of biases of attention, and comprises three subcategories: (1) availability, attention to here and now, easy, and compatible information; (2) heuristics based on imperfect correlations (such as the hindsight bias and omission bias); and (3) focus on a single attribute to the exclusion of others. The second category involves biases that stem from the effects of goals and desires on perceptions and information processing (such as wishful thinking). The last category concerns the relationship between quantitative attributes and their perception, including diminishing sensitivity to changes in gains, losses, and probabilities.
As Baron readily concedes, his classification is suggestive rather than definitive. The complex interrelations between the various phenomena make any attempt at classification rather challenging. Fortunately for us, we do not need to offer such a classification. This chapter does not purport to provide a systematic survey of all behavioral findings,53 or even of those that might be relevant to behavioral law and economics. Rather, it focuses on phenomena whose understanding is necessary for the ensuing analyses. Additional phenomena, which are uniquely relevant to specific legal issues (or whose broader significance has not yet been realized), will be discussed apropos of those issues.
This goal shapes the structure of the remainder of this chapter. Given our perspective of behavioral law and economics, we distinguish between deviations from thin, cognitive rationality (Sections B–E), and deviations from thick, motivational rationality, including studies of moral judgments (Section G).
Within the former category, the chapter first discusses probability assessments and related issues (Section B), and then preferences and decisions. The latter category is divided into phenomena related to prospect theory, arguably the most influential theory in behavioral economics (Section C); phenomena associated with motivated reasoning and egocentrism (Section D); and those related to reference-dependence and order effects (Section E).
Section F discusses bounded willpower and procrastination. These phenomena do not fit squarely into either the cognitive or motivational rationality constructs—although they are closely connected to both.
While economic analysis normatively prioritizes the maximization of overall human welfare over other values, it descriptively assumes that people are rational maximizers of their own welfare. Section G describes studies that show that most people neither share this normative outlook nor conform to the descriptive assumption.
Finally, Section H discusses several issues that cut across the phenomena described in the previous sections. These are: individual differences in judgment and decision-making; the significance of professional training, experience, and expertise; deciding for others; group decision-making; cultural differences; and debiasing.
B. Probability Assessments and Related Issues
Many of the early groundbreaking studies in JDM have dealt with frequency and probability assessments, statistical inferences, and perceptions of risk and uncertainty. This section surveys the main findings in this sphere.
1. Conjunction and Disjunction Fallacies
One of the most famous (and controversial) characters in JDM is Linda. As described in a classic experiment conducted by Tversky and Kahneman, “Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.”54 In one version of the experiment, subjects were asked which of the following alternatives is more probable: “Linda is a bank teller,” or “Linda is a bank teller and is active in the feminist movement.” According to the conjunction rule, the probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and P(B), because the former is included in each of the latter. Nevertheless, in what Tversky and Kahneman dubbed a “flagrant violation of the conjunction rule,” 85 percent of respondents indicated that it was more probable that Linda was a bank teller and an active feminist than a bank teller.55 Additional experiments demonstrated that this logical error occurs in other experimental designs and in other areas (such as predicting the outcomes of sports contests). It is committed, to varying degrees, by students who have studied advanced courses in statistics and decision theory, and by experts making assessments within their area of expertise. It occurs even when financial incentives are offered for giving the right answer, and despite people’s ability to understand the conjunction fallacy.56
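The conjunction rule follows directly from the product rule of probability: P(A&B) = P(A) × P(B|A), and since P(B|A) can be at most 1, the conjunction can never be more probable than either constituent. A minimal numeric sketch, with both probabilities invented purely for illustration:

```python
# Invented figures for illustration only: suppose 5 percent of women with
# Linda's profile are bank tellers, and 60 percent of those are feminists.
p_teller = 0.05
p_feminist_given_teller = 0.60

# Product rule: P(teller & feminist) = P(teller) * P(feminist | teller).
p_both = p_teller * p_feminist_given_teller

# Because P(feminist | teller) <= 1, the conjunction cannot exceed P(teller).
assert p_both <= p_teller
print(round(p_both, 3))  # 0.03, necessarily below 0.05
```

However representative the conjunction feels, no choice of conditional probability can push it above the probability of the single category.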
Tversky and Kahneman sought to explain the conjunction fallacy, as well as other biases in probability assessments and related issues, by means of the representativeness heuristic. People who resort to this heuristic assess the probability of an uncertain event by “the degree to which it is: (i) similar in essential properties to its parent population; and (ii) reflects the salient features of the process by which it is generated.”57 In the present context, Linda seems to have the characteristics of a feminist activist, rather than those of a bank teller. Hence, her description as a bank teller and a feminist activist sounds more representative—and hence more probable—than her description as a bank teller, despite the fact that the probability of her being both things cannot logically exceed the probability of her being just a bank teller.
The causes, generality, and very existence of the conjunction fallacy have been questioned, especially by members of the fast-and-frugal school of JDM.58 It has been argued that what looks like a conjunction fallacy is actually a product of the multiplicity of meanings of the terms “probable” (which may refer to mathematical frequency, but also to intensity of belief, and more),59 and “and” (which in probability theory refers to an intersection, but in natural language may refer to an intersection or to a union of events, as in the expression “Dear friends and colleagues”).60 Without going into details, we note that while the conjunction fallacy is diminished when subjects are instructed to estimate frequencies rather than probability, such instructions do not eliminate the fallacy. Neither is the conjunction fallacy caused by misunderstanding of the meaning of the word “and.” However, some experimental designs, which draw subjects’ attention to the conjunction rule, do eliminate the fallacy.61
Further support for the existence of the representativeness heuristic comes from studies of the disjunction fallacy. According to the disjunction rule, the probability of A-or-B cannot be smaller than the probability of A or the probability of B. For example, the probability that a woman who smoked over a packet of cigarettes a day for many years died of cancer cannot be smaller than the probability that she died of lung cancer. Using this and comparable examples, Maya Bar-Hillel and Efrat Neter conducted carefully designed experiments in which subjects were asked to rank the outcomes by their willingness to bet on each one and by their probability. Since the examples were chosen such that the narrower category was somewhat more representative than the broader one (as in the lung cancer example), most subjects violated the disjunction rule both in their willingness to bet and in their probability assessments.62 Across ten different descriptions and the two questions, 64 percent of the answers violated the disjunction rule.
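The disjunction rule can likewise be illustrated with a toy simulation. The mortality rates below are invented for illustration; the structural point is that death from lung cancer is a subset of death from cancer, so the broader count can never fall below the narrower one:

```python
import random

random.seed(0)

# Invented cause-of-death distribution for a hypothetical population of
# heavy smokers; the exact weights do not matter for the logical point.
causes = ['lung cancer', 'other cancer', 'other cause']
weights = [0.3, 0.2, 0.5]

n = 100_000
lung = 0
any_cancer = 0
for _ in range(n):
    cause = random.choices(causes, weights=weights)[0]
    if cause == 'lung cancer':
        lung += 1
    if cause in ('lung cancer', 'other cancer'):
        any_cancer += 1  # every lung-cancer death also counts here

# The subset relation guarantees P(cancer) >= P(lung cancer).
assert any_cancer >= lung
```

No matter how representative "lung cancer" is of a heavy smoker, judging it more probable than "cancer" violates this inclusion.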
2. Base-Rate Neglect
Base-rate neglect refers to a bias in estimations of likelihood: the tendency to ignore the frequency with which an event occurs, and to focus instead on individuating information—rather than integrate the two. More specific information is deemed to be more relevant; hence it dominates less specific information,63 and more vivid and concrete data make a greater impact on inferences than dull and abstract data.64 The following famous example from one of Kahneman and Tversky’s experiments65 demonstrates the phenomenon: Jack is a forty-five-year-old man. He is married with four children. He is generally conservative, careful, and ambitious. He shows no interest in political and social issues, and spends most of his free time on his many hobbies, which include home carpentry, sailing, and mathematical puzzles. Subjects in one condition in this study were told that Jack was randomly drawn from a pool of people consisting of seventy engineers and thirty lawyers, while subjects in the other group were told that the pool was composed of thirty engineers and seventy lawyers. When asked to estimate what Jack does for a living, subjects paid very little attention to the base rate, and based their assessment almost exclusively on the individuating information.
Along with studies that replicated this result, the robustness, generality, and ecological validity of base-rate neglect have been questioned over the years. An analysis of many experimental and empirical studies demonstrated that people do not routinely ignore base rates.66 It has been suggested that “a base rate has its greatest impact in tasks that (1) are structured in ways that sensitize decision makers to the base rate, (2) are conceptualized by the decision maker in relative frequentist terms, (3) contain cues to base rate diagnosticity, and (4) invoke heuristics that focus attention on the base rate.”67 Thus, when subjects are asked to make a series of assessments, they pay more attention to information that varies from one task to another, than to information that is common to all tasks. Consequently, subjects pay more attention to the base rate when it is manipulated within subject than when it is manipulated between subjects (as in Kahneman and Tversky’s study).68 Similarly, expressing a problem in frequentist terms, rather than as the probability of a single event, elicits correct Bayesian reasoning in the great majority of subjects.69 However, in most real-world contexts, people face only one set of values at a time, and probabilities are often presented in percentages.
Another limitation of some studies of base-rate neglect stems from the questionable assumption that people’s subjective probability assessments equal the stated base rate. People’s assessment of the prior probability may be affected by other (relevant or irrelevant) information, besides the information provided by the experimenter. In such cases, what appears to be base-rate neglect may actually result from a different subjective assessment of the base rate.70 In general, people’s attention to base rates sensibly depends on the diagnosticity and reliability of the individuating information: the less stereotypical the individuating information is, the more weight is given to the base rate.71 It has also been demonstrated that people neglect base rates to a lesser extent in tasks involving concrete, familiar situations (such as reviewing job applications) than in abstract, unfamiliar ones.72
Along with the complex picture regarding the extent to which people ignore base rates, there is also disagreement over the extent to which people should consider base rates when making probability assessments. The relative weight given to the base rate, in relation to the individuating information, varies from one case to another. To use the lawyer-engineer example, if it is given that Jack had studied law, then although it is possible that he works as an engineer, it would be sensible to give exceedingly low weight to the fact that lawyers constitute only 30 percent of the entire pool. Unfortunately, there is no easy answer to the question of what weight should be given to the base rate under different circumstances, if at all.73
3. Confusion of the Inverse
Closely connected to base-rate neglect is the phenomenon known as confusion of the inverse, or the inverse fallacy. Given two events, A and B, people tend to assume—contrary to Bayes’ theorem—that the probability of A given B is about the same as the probability of B given A.74 Assume, for example, that a certain medical condition is found in 1 of every 100 people, and that a test for diagnosing this condition, which is 90 percent accurate, indicates that a person has it. Given the base rate, the likelihood that the person actually has the condition is about 8 percent. There is about a 92 percent likelihood that the test result is a false positive.75 People who estimate that the likelihood of the person having the condition is around 90 percent are not only neglecting the base rate—they are also mistakenly assuming that the probability of the person having the condition, given the positive test results, roughly equals the probability that the results are positive, given that the person has the condition.76 Studies have shown that not only laypeople, but also experts, such as physicians, fall prey to this fallacy.77
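The roughly 8 percent figure follows directly from Bayes’ theorem. A minimal sketch of the computation (assuming, as the example leaves implicit, that “90 percent accurate” means the test’s sensitivity and specificity are both 0.9):

```python
# Bayes' theorem for the diagnostic-test example. Assumed reading of
# "90 percent accurate": sensitivity = specificity = 0.90.
base_rate = 0.01       # P(condition): 1 in every 100 people
sensitivity = 0.90     # P(positive test | condition)
false_positive = 0.10  # P(positive test | no condition) = 1 - specificity

# Law of total probability: the overall chance of a positive result.
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)

# The inverse fallacy conflates P(condition | positive), computed here,
# with P(positive | condition), which is 0.90.
posterior = sensitivity * base_rate / p_positive
print(round(posterior, 3))  # 0.083 -- about 8 percent, not 90 percent
```

Because healthy people vastly outnumber sick ones, the false positives generated by the 99 percent who are healthy swamp the true positives generated by the 1 percent who are ill.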
4. Insensitivity to Sample Size and Related Phenomena
According to the law of large numbers, the larger the sample, the closer its mean is to the mean of the population as a whole. However, people tend to overestimate the extent to which small samples represent the population from which they are drawn.78 This tendency may lead to various erroneous inferences.
One typical error refers to the assessment of whether a certain sequence of events is random. The law of small numbers (a phrase coined by Tversky and Kahneman)79 leads people to believe that apparently patterned sequences are not random even when they may well be.80 For example, people tend to believe that in a family of six children, the sequence BBBGGG is less likely than the sequence GBBGBG (plausibly because the latter appears to be more representative of a random sequence), when in fact they are equally likely.81 A famous real-world example is the “hot hand” in basketball. Players, coaches, and fans tend to believe that a player’s chance of hitting a shot is greater following a previous hit or a few consecutive hits than following a miss or a few misses. Consequently, players may take more difficult shots after successful attempts, and fellow players are often instructed to pass the ball to the player who has just made several shots. In fact, however, while some players are obviously better shooters than others, a large-scale analysis of the performance of professional and nonprofessional players found no evidence for a positive correlation between the outcomes of successive shots of the same player.82 Similarly, there is some evidence that investors put too much weight on the track record of fund managers, causing them to make suboptimal investment decisions based on a relatively short successful or unsuccessful performance streak, although the picture in this sphere is far from clear.83
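The equal-likelihood claim for the two birth sequences is easy to verify, and a simple combinatorial count suggests why the contrary intuition arises (a sketch assuming independent births with equal probabilities of a boy and a girl):

```python
from math import comb

# Any *specific* ordering of six independent births with P(B) = P(G) = 0.5
# has probability (1/2)**6 = 1/64, however patterned it looks: BBBGGG and
# GBBGBG are exactly equally likely.
p_specific = 0.5 ** 6
print(p_specific)  # 0.015625

# The intuition plausibly conflates the ordering with the composition:
# comb(6, 3) = 20 different orderings yield a 3-3 split, so the *category*
# "three of each" is common (20/64), while each sequence remains 1/64.
print(comb(6, 3) * p_specific)  # 0.3125
```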
Another erroneous inference is known as the gambler’s fallacy. When events are known or presumed to be random, as in fair coin tosses, people tend to believe that if something happened more (less) frequently than expected during a given period, it will happen less (more) frequently in the next period, as though there were some kind of corrective mechanism.84 Intriguingly, in experiments conducted by Eric Gold and Gordon Hester, while subjects exhibited the gambler’s fallacy, its incidence was significantly reduced when the coin was switched before the next toss, or was allowed “to rest” a while before it.85 The gambler’s fallacy has been documented in people’s actual behavior outside the laboratory, as well.86
Yet another ramification of insensitivity to sample size is neglect of the fact that considerable deviations from the mean of the entire population are much more likely in small samples than in large ones. This neglect often leads people to look for—and find—alternative explanations for atypical results in small samples. For example, following the finding that small schools are disproportionately represented in the top echelon of successful schools, huge sums of money have been invested in the United States in establishing such schools and splitting large schools into smaller ones. However, it appears that small schools are disproportionately represented not only at the high end of the spectrum, but at the low end as well, simply because there is greater variability among small schools than among large ones.87
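The greater variability of small samples can be illustrated with a short simulation (all numbers are hypothetical, chosen only for illustration, not drawn from the schools literature):

```python
import random
import statistics

random.seed(0)

def school_average(n_students):
    # Each student's score is drawn from the same population
    # (mean 500, standard deviation 100) regardless of school size.
    return statistics.fmean(random.gauss(500, 100) for _ in range(n_students))

small_schools = [school_average(25) for _ in range(1000)]
large_schools = [school_average(400) for _ in range(1000)]

# School-level averages fluctuate far more when schools are small, so
# small schools crowd both the top and the bottom of any ranking.
print(statistics.stdev(small_schools))  # roughly 20 (about 100 / sqrt(25))
print(statistics.stdev(large_schools))  # roughly 5  (about 100 / sqrt(400))
```

Even though every simulated student comes from the identical population, ranking the schools by average score would place small schools disproportionately at both extremes.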
A failure to account for natural fluctuations in the data may also lead to false causal inferences. Due to the phenomenon known as regression to the mean, large deviations from the mean are relatively rare, and they are usually followed by outcomes that are closer to the mean. To use one of Kahneman’s examples,88 imagine that trainees are praised following exceptionally good performances, and scolded following exceptionally bad ones. Since exceptional performances are, by their very nature, exceptional, they are likely to be followed by more ordinary ones. Trainers may thus incorrectly conclude that reproach is effective and praise is counterproductive.
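Kahneman’s trainer example can be reproduced in a few lines: if performance is modeled as fixed skill plus independent noise, exceptional outcomes tend to be followed by ordinary ones even though no feedback of any kind is given (a sketch with hypothetical parameters):

```python
import random
import statistics

random.seed(1)

# Performance = constant skill + independent noise; no praise or reproach
# enters the model at all.
skill = 0.0
performances = [skill + random.gauss(0, 1) for _ in range(100_000)]

# Collect what follows an exceptionally good performance (> 2 sd above mean).
after_praiseworthy = [performances[i + 1]
                      for i in range(len(performances) - 1)
                      if performances[i] > 2.0]

# The follow-ups regress to the mean of 0 -- a trainer who praised the
# exceptional performances would wrongly infer that praise backfires.
print(statistics.fmean(after_praiseworthy))  # close to 0, far below 2.0
```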
5. Certainty Effect
According to expected utility theory, a certain increase or decrease in the probability of a given risk or prospect should have the same effect on people’s utility, regardless of the baseline probability. However, as Maurice Allais pointed out in the early 1950s,89 and as Kahneman and Tversky demonstrated in the late 1970s,90 this premise is descriptively incorrect: people give greater weight to outcomes that are considered certain relative to outcomes that are only probable. Thus, most people are willing to pay much more to increase the probability of winning a moderate gain from 90 percent to 100 percent than they would to increase the probability from 40 percent to 50 percent. In the same vein, they would be willing to pay more to reduce the probability of a given risk from 5 percent to 0 percent than they would to reduce it from, say, 48 percent to 43 percent. Put differently, people display diminishing sensitivity to changes in probability as they move further away from the two boundaries: certainty and impossibility.91 The more emotionally salient the relevant outcomes (such as an electric shock versus a monetary loss, or meeting and kissing one’s favorite movie star versus a monetary gain), the more pronounced the certainty effect.92 Various real-world behaviors have been associated with the certainty effect.93
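Diminishing sensitivity to probability changes is often modeled with a probability-weighting function. The sketch below uses the inverse-S functional form and the gain-domain parameter (γ ≈ 0.61) from Tversky and Kahneman’s later cumulative prospect theory work—one common parameterization among several, not the only one:

```python
def w(p, gamma=0.61):
    # Inverse-S-shaped probability-weighting function (Tversky & Kahneman
    # 1992). Decision weights change steeply near 0 and 1, flatly in between.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# The same 10-point improvement carries far more decision weight when it
# delivers certainty than when it occurs in the middle of the scale.
jump_to_certainty = w(1.0) - w(0.9)
mid_scale_change = w(0.5) - w(0.4)
print(jump_to_certainty > mid_scale_change)  # True: the certainty effect
```

The same function also captures the overweighting of small probabilities discussed in connection with prospect theory below: w(0.1) exceeds 0.1.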
6. The Availability Heuristic
Some of the heuristics and biases described above concern people’s inferences from known probabilities. But how do people estimate probabilities in the first place? Drawing on substantial experimental findings, Tversky and Kahneman argued that people often determine the likelihood of events and the frequency of occurrences according to the ease of recalling similar events or occurrences.94 They dubbed this the availability heuristic.95 For example, one may assess the frequency of divorce in society by recalling instances of divorce among one’s acquaintances. As Tversky and Kahneman noted, availability is a useful clue for estimating frequency or probability, because there is usually a good correlation between the prevalence of occurrences and the ease of recalling them. Alas, since availability is influenced by additional factors besides frequency, reliance on this heuristic leads to predictable mistakes.96
The exact mechanism behind the availability effect—people’s subjective experience of the ease or difficulty of recalling items, or their actual ability to recall those items within the allotted time—is unclear. While Tversky and Kahneman believed that it is the former, subsequent studies have indicated that at least sometimes it is the latter, and that the two may lead to different outcomes.97 Since the availability heuristic is based on a recall of specific instances, people are more inclined to use it when they are primed to think at a more concrete and less abstract level.98
Among the factors affecting the availability of events or other items, one may mention their familiarity, vividness, and recency. In a classic experiment, subjects listened to a list of names, and were then asked to indicate whether the list contained more men or more women. The lists used in the experiment contained nineteen names of very famous figures of one gender, and twenty names of less famous figures of the other gender. Most subjects erroneously believed that the lists contained more people of the gender represented by more famous people, as those names were easier to recall.99
In another experiment, subjects who were given descriptions of symptoms of a disease that were easier to imagine assessed the likelihood that they would contract that disease higher than did subjects who had been given less easily imaginable descriptions of symptoms—especially when they were asked to actually imagine experiencing those symptoms. In fact, subjects who were asked to imagine difficult-to-imagine symptoms gave lower estimates of the likelihood of contracting the disease than subjects who received the same descriptions, without any instruction to try to imagine them.100 More vivid events are often more accessible also because they produce a greater emotional reaction. The availability effect is therefore sometimes connected to the affect heuristic—the automatic, negative or positive, affective response to stimuli that steers people’s judgments and decision-making.101
It follows, then, that actually seeing an event, such as a car accident, has a greater impact on the estimated likelihood of car accidents than merely reading about the accident in a newspaper. However, extensive and vivid media coverage of events may also significantly affect people’s assessments of frequency (and severity). In this context, Timur Kuran and Cass Sunstein have called attention to the perils of availability cascades—namely, “a self-reinforcing process of collective belief formation, by which an expressed perception triggers a chain reaction that gives the perception [of] increasing plausibility through its rising availability in public discourse.”102 Thus, “availability entrepreneurs” may manipulate the content of public discourse in order to advance their agendas. The resulting mass pressure is likely to result in questionable regulation of particular risks, and a problematic increase in punishment of certain offenses.103
Of course, availability is not the only factor affecting assessments of likelihood,104 and once subjects realize the perils of this heuristic, they can overcome its biasing effect to some extent.105 Concomitantly, availability affects people’s judgment and decision-making in other ways besides its impact on likelihood assessments. For example, it has been found that individual investors are considerably more likely to invest in attention-attracting stocks—“stocks in the news, stocks experiencing high abnormal trading volume, and stocks with extreme one-day returns”—simply because they do not even consider investing in most other stocks.106
7. Subadditivity
In a seminal study, Baruch Fischhoff, Paul Slovic, and Sarah Lichtenstein presented subjects—both laypersons and experienced mechanics—with a list of possible causes for a car’s failure to start, and asked them to assess the frequency of each cause.107 In addition to several specific causes, the list included a residual category of “all other problems.” They found that the subjects largely disregarded causes that were not explicitly mentioned. For example, the estimated frequency of each listed cause was considerably smaller when the list contained six specific causes and the residual category (the unpruned list) than when it contained only three of those causes and the residual category (the pruned list). The assessed likelihood of “all other problems” in the pruned list was much lower than the sum of the frequencies of the removed causes plus the residual category in the unpruned list. In addition, they found that the assessed frequency of any given cause was higher once its constituent components were presented separately.
Both these phenomena may be explained by the availability heuristic: causes that had not been mentioned were less likely to come to the subjects’ minds; hence they overestimated the frequency of the causes that had been brought to their attention. The former phenomenon also reflects what Kahneman has dubbed what you see is all there is (WYSIATI).108 The second phenomenon, which was further examined in subsequent studies, is known as subadditivity.109
Subadditivity—the tendency to judge the probability of an event as smaller than the sum of probabilities of mutually exclusive and collectively exhaustive sub-events—is particularly troubling when the sum of the probabilities of the sub-events exceeds 1 (100 percent). In one study, subjects were asked to estimate the percentage of U.S. married couples with a given number of children—the number being the last digit of each subject’s telephone number, that is, 0 through 9.110 The sum of the ten groups’ mean estimates was 1.99, and the sum of the medians was 1.8. The sum of the mean probabilities for 0, 1, 2, and 3 children alone was 1.45.
Several studies have shown that subadditivity does not characterize complementary binary events, such as when subjects are asked to estimate either the percentage of U.S. married couples with “less than 3 children,” or with “3 or more children.”111 Possibly, this is because in binary complementarity, people estimate the probability of an event relative to its complement. However, some studies report subadditivity in binary events as well,112 and there is also evidence of superadditivity (sum of probabilities of complementary events smaller than 1) in such assessments under particular conditions.113
8. Hindsight Bias
At times, people are asked to assess the ex-ante probability of events in hindsight. In such instances, the available information—the fact that the outcome did in fact occur—could cause people to mis-assess the probability of the event taking place. More specifically, people may overestimate the initial probability of an event once they are aware that the event has occurred.
This hindsight bias (and its close relative, the “I knew it all along” bias) is one of the first phenomena to be systematically documented in the JDM literature. The initial contribution to the study of this bias was presented by Baruch Fischhoff.114 In a classic study, Fischhoff asked his subjects to read a detailed description of the historical background leading to the nineteenth-century British-Gurka war, and then estimate the likelihood of four different potential outcomes of the event. Unbeknownst to the participants, they were randomly assigned to either a foresight or a hindsight condition. Participants in the foresight group were given no outcome information. Participants in the hindsight groups were informed that one of the potential outcomes was the “true” outcome of the event. The results of the experiment showed that subjects were unable to ignore outcome information. Once they were told that a certain outcome occurred, they tended to view it as significantly more likely to have happened.
The basic finding of Fischhoff’s experiment has been replicated in dozens of studies.115 These studies employed both the between-subject design described above, and a within-subject design focusing on people’s assessment of the probability of an event before and after it occurred.116 In addition, researchers have examined the effect of numerous variables such as subjects’ level of expertise and age on the size of the bias.117 Applied studies have documented the existence of the hindsight bias in specific areas such as auditing, medicine, and the law.118 As one review concluded, “results from many experiments converge on the conclusion that outcome feedback sharply inhibits thinking about alternatives to the reported outcome.”119
While the existence of the hindsight bias is undisputed, researchers have highlighted distinct underlying processes that might explain it. One group of explanations focuses on the cognitive aspects of the bias.120 According to this perspective, people search their memories for old beliefs that are confirmed by the outcome information. Another cluster of explanations focuses on motivational aspects of the bias.121 Since people want to perceive themselves in a favorable light, and the ability to predict events precisely is praiseworthy, they tend to overstate their ability to do so.
The hindsight bias exhibits strong resilience in the face of different debiasing efforts, such as adding incentives for accuracy, and drawing participants’ attention to it.122 The general picture emerging from a meta-analysis of this point is that “manipulations to reduce hindsight bias did not result in significantly smaller effect sizes . . . than studies in which no manipulations to increase or reduce hindsight bias were included.”123 Nonetheless, one debiasing technique that has proven relatively effective is the consider-the-opposite strategy—namely, encouraging people to actively think about counterfactual scenarios that do not involve the outcome that had materialized. As further described below, Hal Arkes and his colleagues have demonstrated the effectiveness of the strategy in the context of evaluating medical decisions.124
9. Ambiguity Aversion
Thus far we have examined how people estimate probabilities and how they draw inferences from known probabilities. We now turn to yet another dimension of people’s attitude to risk and uncertainty. The distinction between risk (or measurable uncertainty)—a situation in which outcomes are not certain, but the probabilities of the possible outcomes are known—and uncertainty (or unmeasurable uncertainty)—a situation in which not only the outcomes, but also their probabilities, are unknown—was introduced by Frank Knight in 1921.125 Forty years later, Daniel Ellsberg demonstrated people’s aversion to uncertainty—commonly dubbed ambiguity aversion—through two famous thought experiments demonstrating what is now known as the Ellsberg paradox.126
Imagine there are two urns, each containing red and black balls, from which a single ball is drawn at random. It is known that in one of them there are exactly 50 red balls and 50 black ones—while the other also contains 100 balls, but at an unknown ratio of red and black balls. If you draw a red ball, you win a prize. Which urn would you prefer to draw from? Most people prefer to draw a ball from the first urn—the one with the known probabilities. Arguably, people express this preference out of suspicion that the person in charge of the game has placed a small number of red balls (or perhaps none at all) in the second urn, to minimize or avoid having to award the prize. Interestingly, however, most people persist in preferring to draw from the urn with the known probabilities, even if, immediately after making their first choice, they are offered a similar prize if they draw a black ball from one of the two urns. Thus, it cannot be said that people prefer the first urn because they suspect that there are fewer red—or black—balls in the other urn.
Ambiguity may have various sources. It may, for example, result from missing information about the credibility of one’s sources of information, or from a narrow evidentiary basis for determining the distribution of possible outcomes. There is some evidence that people’s ambiguity aversion depends on its source. Specifically, it has been shown that insurance professionals charged a higher “ambiguity premium” when ambiguity resulted from disagreement among experts about the probability of certain risks, than from other sources (thus supporting a conflict aversion hypothesis).127 Ambiguity may also come in varying degrees. Thus, rather than choosing between an urn with 50 red balls and 50 black ones, and an urn in which there may be any number of red balls from 0 to 100, the number of red balls in the latter urn may be somewhere between 20 and 80, or between 40 and 60, etc. More generally, one may lack information about the distribution of possible outcomes, but know the exact distribution of conceivable distributions of the outcomes—or may lack information even about the distribution of possible distributions.128
Numerous experimental studies have confirmed that people tend to be ambiguity-averse, and are willing to pay considerable sums of money to avoid ambiguity. Increasing the range of probabilities increases ambiguity aversion. Some studies have found that ambiguity aversion is weaker, or even eliminated or reversed, with regard to losses. Indeed, there is some evidence of a fourfold pattern of ambiguity attitudes, with ambiguity aversion for high‐likelihood and ambiguity seeking for low‐likelihood gain events, and the opposite pattern for losses. Some studies found slightly weaker ambiguity aversion for small payoffs than for large ones. Conflicting results have been obtained in studies of the correlation between individuals’ risk aversion and their attitude to ambiguity.129 It was found that observation by peers increases ambiguity aversion, but that group decision-making—especially when groups consist of both ambiguity-neutral members and members who are either ambiguity-averse or ambiguity-seeking—increases ambiguity neutrality.130 The above findings were generally, albeit not invariably, replicated in studies of particular activities, outside of the laboratory, such as buying and selling insurance, and marketing.131
While risk aversion is conventionally explained by the diminishing marginal utility of resources, there is no obvious explanation for ambiguity aversion. One explanation views ambiguity aversion as an overgeneralization of the rule that it is preferable to avoid decisions where there is insufficient information—especially when this information is known to exist or may become available in the future.132 A complementary explanation, offered by Chip Heath and Amos Tversky, draws on the finding that people are less ambiguity-averse the more they feel knowledgeable about the relevant issue.133 According to this explanation, people consider not only the expected payoffs of a bet, but also the credit or blame associated with the outcome. Since they value the expected credit for a successful decision within their area of expertise—while believing that a failure might be attributed to chance—they are less reluctant to make decisions in ambiguous environments within their area of expertise.134
Ambiguity aversion poses a challenge to expected utility theory (as well as to Leonard Savage’s subjective expected utility theory)135 as a descriptive theory of human decision-making, because in the real world, but for games of chance, the exact probabilities of uncertain events are rarely known with much precision. It does not follow, however, that ambiguity aversion is irrational.136 For example, when a decision-maker faces more than one possible distribution of probabilities, she may use the maximin choice rule, and choose the option that maximizes the minimum expected utility over those distributions. In Ellsberg’s urn example discussed above, the decision-maker would thus opt for the first urn, because it guarantees a 50 percent probability of winning, whereas in the other urn the probability may be much higher (up to 100 percent), but also much lower (including 0 percent).137 Various other suggestions have been made to formally model ambiguity aversion—some of which fit the available data on this phenomenon better than others.138 The finding that ambiguity aversion is largely immune to debiasing by explanations139 lends support to the view that it is not akin to an arithmetic or logical error. Be that as it may, a descriptive theory of human decision-making should take this phenomenon into account.
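The maximin reasoning just described can be sketched numerically. The code below treats the ambiguous urn as a set of 101 candidate compositions (0 to 100 red balls)—a modeling choice for illustration, not Ellsberg’s own formalization:

```python
# Maximin over the two Ellsberg urns, betting on red.
urns = {
    "known": [0.5],                              # one known distribution
    "ambiguous": [r / 100 for r in range(101)],  # 0 to 100 red balls
}

# Worst-case winning probability for each urn across its candidate
# distributions.
worst_case = {name: min(dists) for name, dists in urns.items()}
print(worst_case)  # {'known': 0.5, 'ambiguous': 0.0}

# Maximin picks the urn with the best worst case: the unambiguous one,
# even though the ambiguous urn could offer up to a 1.0 chance of winning.
choice = max(worst_case, key=worst_case.get)
print(choice)  # known
```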
C. Prospect Theory and Related Issues
In 1979, Kahneman and Tversky proposed prospect theory as a descriptive theory of people’s decisions under risk.140 Almost forty years later, it is still the most ambitious and influential behavioral theory. This section describes prospect theory in general (Subsection 1), as well as key elements of it whose significance extends beyond prospect theory: the role of emotions (Subsection 2), reference-dependence (Subsection 3), and framing effects (Subsection 4). The section then describes several phenomena that have been associated with elements of prospect theory: status quo and omission biases, the endowment effect, and sunk costs and escalation of commitment (Subsections 5, 6, and 7, respectively).
1. Prospect Theory in General
Prospect theory consists of several elements, all of which deviate from the tenets of expected utility theory. Most important, prospect theory posits that people ordinarily perceive outcomes as gains and losses, rather than as final states of wealth or welfare. Gains and losses are defined in relation to some reference point. The value function is normally concave for gains (implying risk aversion) and convex for losses (reflecting risk-seeking). Thus, most people would rather receive $100 than take part in a gamble in which they are equally likely to receive either $200 or nothing. However, most people would prefer participating in a gamble in which they are equally likely to lose $200 or nothing, over paying a sum of $100 with certainty. To put it another way, the concavity of the value function in the domain of gains and its convexity in the domain of losses reflect diminishing sensitivity: the further away a certain gain or a loss is from the reference point, the smaller its effect on one’s utility.141
According to prospect theory, not only does the attitude to risk-taking differ between the domain of gains and the domain of losses, but the value function is also generally steeper for losses than for gains. This means that the disutility generated by a loss is greater than the utility produced by a similar gain. Since losses loom larger than gains, people are generally loss-averse. The subjective value function therefore has a “kink” at the reference point. Tversky and Kahneman estimated that monetary losses loom larger than gains by a factor of 2.25.142 A meta-analysis of forty-five studies of the related phenomenon of the endowment effect (discussed below) found that the median ratio between people’s willingness to pay (WTP) for an item they do not yet have and their willingness to accept (WTA) to part with a similar item is 1:2.6 (mean 1:7.17).143 A subsequent meta-analysis of 164 experiments on the endowment effect found that the median ratio between WTP and WTA is 1:2.9 (with very substantial variation).144
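The qualitative pattern described in the last two paragraphs can be made concrete with the standard parametric value function from Tversky and Kahneman’s 1992 paper (α = β = 0.88, loss-aversion coefficient λ = 2.25); the dollar figures are the chapter’s own examples:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Prospect-theory value function: concave over gains, convex over
    # losses, and steeper for losses by the loss-aversion factor lam.
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Risk aversion for gains: $100 for sure beats a 50/50 shot at $200.
print(value(100) > 0.5 * value(200))        # True

# Risk seeking for losses: a 50/50 risk of losing $200 is preferred
# to a certain loss of $100.
print(0.5 * value(-200) > value(-100))      # True

# Loss aversion: a $100 loss looms larger than a $100 gain.
print(abs(value(-100)) > value(100))        # True
```

The first two inequalities follow from diminishing sensitivity alone (α, β < 1); the third requires λ > 1.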
Prospect theory also posits that people’s risk aversion in the domain of gains, and risk-seeking in the domain of losses, are reversed for low-probability gains and losses.145 Were it not for this reversal, prospect theory would be incompatible with the fact that many people buy insurance against low-probability risks, and lottery tickets. Finally, prospect theory postulates that the subjective weighing of probabilities systematically deviates from the objective probabilities, exhibiting the certainty effect discussed above.146 The key elements of prospect theory—what Kahneman hailed as “the core idea of prospect theory”—are, however, reference-dependence (the notion that “the value function is kinked at the reference point”) and loss aversion.147
Prospect theory has proven useful in explaining various real-world phenomena, such as the so-called equity premium puzzle,148 the prevalence of contingent-fee arrangements among plaintiffs and their rarity among defendants,149 and more.150 To use the first example, the demand for Treasury bills and other bonds, whose long-term returns are much smaller than those of stocks, is incompatible with standard notions of risk aversion. However, it is perfectly compatible with the notion of loss aversion, assuming investors evaluate their portfolios on an annual basis and are willing to forgo considerable expected gains to avoid even a small risk of loss.
Prospect theory has been the subject of considerable criticism. Some studies have challenged the experimental findings underlying the theory, and others have questioned the generality of the notions of reference-dependence and loss aversion. Other studies have offered alternative explanations for the main features of prospect theory.151 However, the overall picture emerging from hundreds of studies is clear: people’s preferences, choices, and judgments do generally depend on the perceived reference point, and exhibit loss aversion and diminishing sensitivity to marginal gains and losses. In what follows, we focus on loss aversion, which carries the broadest implications for legal analysis.152
2. Loss Aversion and Emotions
Many studies have pointed to the existence of a negativity bias—namely the phenomenon whereby negative experiences have a greater impact on individuals than positive ones.153 For example, negative social interactions affect people’s well-being to a greater extent than positive ones.154 Studies of physiological arousal—as measured by autonomic activation indicators, such as pupil dilation and increased heart rate—similarly demonstrated that negative events or outcomes yield greater arousal than positive ones.155
Gains and losses are closely connected to emotions of pleasure and pain.156 In fact, neurological studies using fMRI have demonstrated that decision-making in general, and decisions characterized by loss aversion in particular, involve regions in the brain, such as the amygdala, which are known to be associated with emotions.157 Similarly, it has been found that amygdala damage eliminates monetary loss aversion,158 and that deficient ability to process emotional information is correlated with reduced loss aversion in both risky and riskless decisions.159
3. Reference-Dependence
Prospect theory posits that the benchmark with reference to which people perceive outcomes as gains or losses depends on how they frame the scenario or the choice facing them. Ordinarily, people take the status quo as the reference point, and view changes from this point as either gains or losses. It has been plausibly argued, however, that this assumption pertains only, or primarily, when people expect the status quo to be maintained. When expectations differ from the status quo, using those expectations as the reference point may yield better explanations and predictions of people’s behavior.160 People’s perception of the reference point is also influenced by the status of other people. For example, when an employee receives a smaller salary raise than everyone else in a workplace, she may view it as a loss—even though it has improved her position in absolute terms.161
A person’s reference point may change in dynamic situations. Most research suggests that people quickly adjust their reference point after making gains (in relation to their initial position), but are much more reluctant to do so after incurring losses.162 In the long run, people’s subjective feeling adapts even to extreme changes, such as winning large sums of money in a lottery, or losing a limb in an accident.163 A considerable body of research has studied situations where there is more than one plausible reference point. Basically, in such cases, people appear not to compare outcomes with a single reference point that is a weighted composite of the competing ones. In some cases there is a single dominant reference point; in others, people learn that it is possible to view the same outcome as either a gain or a loss, and their decisions may be affected by the relative strength of each framing.164
People sometimes consciously create a reference point by setting a goal for themselves. Perceiving one’s goal as the reference point is instrumental to achieving it. Once a goal is set, it divides outcomes into regions of success and failure. Since outcomes that are worse than the reference point yield a greater hedonic impact, the mere adoption of a goal provides a stronger motivation to attain it.165 Prospect theory provides an explanation for another well-documented finding of the psychological goal literature: the fact that people make a greater effort to achieve a goal the closer they are to doing so. This phenomenon is compatible with the convexity of the value function for losses.166
Reference-dependence is not unique to judgments and decision-making in risky and riskless environments. This phenomenon and related issues are further discussed below.167
4. Framing Effects
A key notion associated with prospect theory, but whose potential implications go far beyond this theory, is framing of decisions, or the framing effect. In their seminal study that introduced this effect, Tversky and Kahneman presented subjects with one of two problems.168 Problem 1 read as follows:
Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimate of the consequences of the programs are as follows:
If Program A is adopted, 200 people will be saved.
If Program B is adopted, there is 1/3 probability that 600 people will be saved and 2/3 probability that no people will be saved.
Which of the two programs would you favor?
In Problem 2, the outcomes of the alternative programs were described as follows:
If Program C is adopted, 400 people will die.
If Program D is adopted, there is 1/3 probability that nobody will die and 2/3 probability that 600 people will die.
Which of the two programs would you favor?
(p.47) Evidently, the only difference between the two problems was that in Problem 1 the outcomes were framed as possible gains (survival), while in Problem 2 as possible losses (death). Consistent with prospect theory, 72 percent of the respondents in Problem 1 opted for the less risky Program (A), while in Problem 2, 78 percent favored the riskier Program (D).
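The preference reversal cannot be traced to the substance of the options, since the four programs are extensionally equivalent. A quick check of the arithmetic, merely restating the problem as given:

```python
from fractions import Fraction  # exact arithmetic, avoiding float rounding

total = 600            # lives at stake
p = Fraction(1, 3)     # probability of the good outcome in the gambles

# Problem 1 (gain frame): expected number of lives saved
ev_A = 200
ev_B = p * 600 + (1 - p) * 0

# Problem 2 (loss frame): expected deaths, converted back to lives saved
ev_C = total - 400
ev_D = total - (p * 0 + (1 - p) * 600)

# All four programs have the same expected outcome; only the frame differs.
assert ev_A == ev_B == ev_C == ev_D == 200
```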
The Asian disease problem is but one among several paradigms in the vast literature on framing effects.169 It exemplifies what Irwin Levin and his coauthors have labeled risky choice framing—namely, the effect of different descriptions of the same outcomes on people’s risk attitude.170 Another type of framing effect is goal framing.171 While in risky choice framing, different frames may induce opposite choices, in goal framing the various frames are all aimed at promoting a single behavior or end result. To that end, people’s attention is drawn either to the expected benefits of the pertinent behavior/result (a positive framing) or to the expected costs that it would avoid (a negative framing). For example, to promote breast self-examination, women may be presented with information highlighting the positive consequences of conducting the examination, or the negative consequences of not conducting it. Consistent with loss aversion, some studies have demonstrated that negative framings are more effective than positive ones; but other studies found no such effect.172
Finally, the simplest paradigm is attribute framing. Unlike the previous two, it does not involve a choice between two options, or even an attempt to induce a single behavior, but simply an assessment of the quality or desirability of an object.173 For example, in one study, subjects rated ground beef as better tasting and less greasy when it was labeled “75% lean” than when it was labeled “25% fat.”174
In a meta-analysis of 136 published studies from which 230 effect sizes were calculated, Anton Kühberger has found that the framing effect does exist, but its magnitude is small to moderate.175 As he concludes, “[d]iverse operational, methodical and task-specific features (p.48) make the body of data heterogeneous to a degree that makes it impossible to speak of ‘the framing effect.’ Framing appears in different clothes, some effective in producing an effect and some ineffective.”176 Other reviews of the literature have reached similar conclusions.177
The picture does not become clearer when turning from the laboratory to the real world. Some studies—particularly those dealing with default arrangements in specific contexts—point to robust framing effects.178 The prevalent use of various kinds of framing techniques in marketing similarly indicates that marketers believe in the effectiveness of these techniques.179 At the same time, some studies have found no framing effects in the real world;180 and it is difficult to assess the robustness and generality of the effect outside of the laboratory, since in the laboratory decisions are often made in isolation from social contact and context.181 It appears that “[m]any simple choice problems are so well-structured—experimentally or naturally—that the reference point is for all practical purposes determined by the situation.”182
Having discussed the main features of prospect theory in general, we now turn to three more specific phenomena that have been associated with it: the status quo and omission biases, the endowment effect, and sunk costs (also known as escalation of commitment).
5. Status Quo and Omission Biases
The status quo bias refers to the phenomenon that, other things being equal, people tend to stick to the state of affairs they perceive as the status quo rather than opting for an alternative one.183 Usually, changing the status quo requires an action, while maintaining the status quo involves a mere omission. Hence, the tendency to keep the status quo and the tendency to prefer omission to commission (commonly dubbed the omission bias) are confounded. However, there is experimental evidence that these biases also exist separately, and that their effects are additive.184 When the two biases pull in opposite directions—as when inaction is expected to result in a change, while maintaining the status quo requires (p.49) an action—there is evidence that subjects prefer inaction.185 Putting aside such exceptional cases, we shall mostly discuss the status quo and omission biases together.
To illustrate, in one experiment, William Samuelson and Richard Zeckhauser asked subjects to imagine that they had inherited a large sum of money and had to choose between several investment options. In the neutral version, all options were presented on an equal footing. In the status quo version, subjects were asked to imagine that they had inherited a portfolio of cash and securities and had to decide whether to retain this portfolio or switch to an alternative one. No matter which option was presented as the status quo, the probability that it would be chosen increased significantly compared with the alternative options and with the probability that it would be chosen in the neutral version. The more options included in the choice set, the stronger was the relative bias in favor of the status quo.186 The status quo/omission bias has been experimentally demonstrated in other hypothetical choice tasks, and with different types of subjects.
Several natural experiments have also provided strong empirical support for the status quo/omission bias. For example, the rate of employee participation in a retirement savings plan at a large U.S. corporation was studied before and after a change in the default.187 Before the change, employees were required to affirmatively elect participation. After the change, new employees were automatically enrolled in the plan unless they opted out of it. The change of default resulted in a dramatic increase in retirement plan participation. Comparable data exists in relation to postmortem organ donations. Within the European Union, in some countries people are organ donors unless they register not to be, whereas in others no one is an organ donor without registering to be one. The donation rate in most presumed-consent countries is close to 100 percent, while in the explicit-consent countries it ranges from 4 percent to 27 percent.188 Experimental studies have indicated that this difference is most plausibly a product of the status quo/omission bias.189
A common explanation for the status quo/omission bias is loss aversion.190 When departing from the status quo involves both advantages and disadvantages, people are inclined to avoid such a departure, because the disadvantages likely loom larger than the advantages. For the same reason, when there is uncertainty about whether departing from the status quo would result in gains or losses, people are inclined to avoid such a departure.
(p.50) However, loss aversion is not the only explanation for the tendency to maintain the status quo and to prefer inaction to action. To begin with, when people have no clear preference between the status quo and an alternative option, and departing from the status quo entails transaction or decision costs, doing nothing (hence keeping the status quo) seems sensible. However—unless the choice is between near-identical options—this explanation is problematic when decision and transaction costs are trivial. The incompleteness of this explanation has been further demonstrated by carefully designed experiments, in which subjects were not asked to choose between policies but merely to rate them (thus avoiding decision costs). The results showed that merely labeling a policy as the status quo enhanced its likeability by providing a biased viewpoint from which its relative pros and cons were evaluated.191
The very fact that a certain state of the world already exists may cause people to favor it: “what is, is good.”192 People tend to rationalize and legitimize the existing state of affairs.193 However, these explanations were directly ruled out in the context of rating competing policies, where subjects rated the policies described as the status quo more highly than alternative ones, even though they did not believe that the very fact that a policy is in force attests to its merit.194
People are also viewed as bearing a greater moral responsibility for harmful outcomes they actively brought about than for those they brought about passively.195 Inaction thus entails bearing less responsibility. Consequently, people will sometimes prefer harmful omissions to less harmful commissions.196
6. Endowment Effect
(a) Significance and Scope
The endowment effect (also known as the WTA-WTP disparity) is the phenomenon whereby individuals tend to place a higher value on objects and entitlements they already have, compared with objects and entitlements they do not.197 The maximum amount people are (p.51) willing to pay (WTP) for a certain good or entitlement they do not yet have is often lower than the minimal amount they would be willing to accept (WTA) for relinquishing it if they already owned it. This WTA-WTP disparity runs counter to a fundamental independence assumption of standard economic theory, namely that “the value of an entitlement to an individual is independent of the relationship between the individual and the entitlement in the current state of the world.”198 Hence, it contradicts the notion that in a world of zero transaction costs and no limitations on trade, the initial allocation of legal entitlements would not determine their final allocation.199 Unsurprisingly, the endowment effect has been one of the most extensively studied phenomena in behavioral and experimental economics.
Evidence of the gap between the maximal amount of money people are willing to pay for an entitlement and the minimal sum they are willing to accept to give up a similar entitlement was offered as early as the 1960s and 1970s.200 However, only in 1980 was the notion of the endowment effect and its association with loss aversion put forward by Richard Thaler.201 Since then, the endowment effect has been confirmed in numerous experimental studies. These studies have usually taken one of two forms. In one form, subjects were asked how much money they would be willing to pay for a certain item or an entitlement, and how much they would require to part with a similar item or entitlement, thus establishing the WTA-WTP disparity.202 In the other form, subjects were given various items and the opportunity to trade them for other items. These experiments found a trading anomaly in the form of a reluctance to trade a received item for an alternative one, whatever the received item was.203
Closely related to the status quo and omission biases, the endowment effect has been found to apply not only to tangible goods, but to intangible entitlements as well—such as working hours,204 exposure to health risks,205 and contractual rights under default rules.206
(p.52) Various factors determine the existence and scope of the endowment effect. Thus, the reluctance to trade decreases dramatically when the two items are identical and owners are offered a minor incentive to trade.207 As the similarity between the items increases, the reluctance to trade decreases.208 The more difficult it is for people to compare the endowment item and the proposed alternative, the greater the reluctance to trade.209
Money does not create an endowment effect.210 More generally, there is no endowment effect when goods are held for exchange (such as commercial stock), rather than for use.211 However, the endowment effect does apply to financial instruments or bargaining chips whose value is uncertain.212
Another factor bearing on the strength of the endowment effect is the source of the object. Thus, subjects who believed that they received an object as a prize for their performance on a classroom exercise displayed a stronger endowment effect than those who believed that they obtained the object by chance.213 Similarly, a stronger endowment effect was found with regard to objects received as gifts from a close friend.214 Finally, creators of artistic goods (paintings) showed a particularly strong endowment effect regarding their creations.215
There is some evidence that trading experience reduces or even eliminates the endowment effect.216 However, it appears that experience does not eliminate the WTA-WTP disparity with regard to non-market items that are difficult to compare.217 Plausibly, then, the elimination of the endowment effect stems not from trading experience, but from the (p.53) fact that traders are holding goods for exchange—where, as previously indicated, the effect does not apply.
(b) Causes and Explanations
While the existence of the endowment effect is rarely denied,218 the causes and explanations for the WTA-WTP disparity are hotly debated.219 One primary explanation is loss aversion. “[R]emoving a good from the endowment creates a loss while adding the same good (to an endowment without it) generates a gain.”220 Since money is valued for its exchange value, it does not create an endowment effect; hence the seller’s perceived loss due to parting with the object is not paralleled by the buyer’s loss due to parting with her money.
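On this account, the size of the disparity should be governed by the loss-aversion coefficient. A back-of-the-envelope sketch, assuming for simplicity a piecewise-linear value function, no endowment effect for money, and a hypothetical consumption value of $4 for the good:

```python
LAMBDA = 2.25        # commonly cited loss-aversion estimate
good_value = 4.0     # hypothetical consumption value of the good, in $

# Buyer: acquiring the good is a gain; the money paid is valued linearly
# (money, held for exchange, is assumed to carry no endowment effect),
# so WTP equals the good's consumption value.
wtp = good_value

# Seller: parting with the good is a loss, weighted by LAMBDA; the money
# received is a gain, valued linearly.
wta = LAMBDA * good_value

assert wta / wtp == LAMBDA   # predicted WTA/WTP ratio equals the coefficient
```

With a coefficient in the vicinity of 2, this simple account predicts WTA roughly double WTP, broadly consistent with the magnitude of the disparity reported in many experiments.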
Another explanation for the endowment effect draws on the notion that owning an object creates an association between the item and one’s self. When owning an object becomes part of one’s self-definition, a self-serving bias (i.e., people’s desire to see themselves in a favorable light)221 likely results in an increased valuation of the object—the so-called mere ownership effect.222 This hypothesis has been corroborated by experiments in which the WTP of buyers who happened to own an item identical to the one they were offered was equivalent to the sellers’ WTA.223 It also falls in line with findings that there is no endowment effect in relation to money and other goods held for exchange, and that the endowment effect is stronger for goods received as a reward for a successful performance. (p.54) It is, however, difficult to see how this explanation can apply, for instance, to people’s reluctance to trade lottery tickets or bargaining chips whose value is uncertain.
Yet another psychological explanation rests on biased information uptake and processing: owning an object increases the accessibility of, and attention to, information that supports keeping the object, whereas a decision whether to acquire an object increases the accessibility of, and attention to, information that supports keeping one’s money or receiving money rather than the object.224 Like the self-definition explanation, this explanation hardly applies to lottery tickets, but perhaps people’s reluctance to trade lottery tickets is not a manifestation of the endowment effect, but rather of expected regret.225
Others have argued that standard economic theory can explain at least some manifestations of the WTA-WTP disparity without resorting to psychological phenomena. According to one such explanation—based on the decreasing marginal utility of wealth—the very fact that a person does not have a certain asset makes her poorer. She therefore values each dollar more than the endowed person does, and thus her WTP is smaller than the owner’s WTA. However, this explanation is irrelevant to most manifestations of the endowment effect. It is neither relevant to objects that constitute a minuscule portion of people’s wealth, nor to experiments in which potential buyers receive a sum of money equivalent to the value of the object given to potential sellers.
Another attempt to square the endowment effect with standard economic theory is the argument that the effect results from an inappropriate application of the normal bargaining strategy of “Buy low, sell high.” However, this argument does not account for the findings of experiments in which subjects chose between keeping an object and trading it for another object without stating their WTA or WTP values. It is also difficult to accept when experiments are designed so that the subjects’ dominant strategy is to reveal their true WTP and WTA.226
Yet another economic explanation relies on the income and substitution effects. In the absence of substitutes for an entitlement or a good, WTA may be infinite, whereas WTP will always be capped by one’s income, thus resulting in a large WTA-WTP disparity. The more an entitlement or an object is perceived as being unique, the greater the expected WTA.227 However, while this explanation applies to unique and valuable goods, it can hardly explain the endowment effect with regard to ordinary, inexpensive goods such as coffee mugs, and even less so the trading anomaly over lottery tickets, which are perfect substitutes.228
(p.55) Additional possible causes of trading anomalies include owners’ attachment to goods;229 individuals’ disinclination to sell common items (which they usually only buy);230 the herd effect (when subjects publicly signal their willingness/unwillingness to trade by raising/not raising their hand);231 transaction and decision costs; lack of information about the value of goods,232 and their market price;233 subjects’ misinterpretation of their receipt of an item from the experimenter as an indication of its high value or the desirability of keeping it;234 and misconceptions about the elicitation-of-value procedure.235
While all these factors may indeed contribute to the WTA-WTP disparity, the overall picture emerging from dozens of studies is that none of them—either individually or in any combination—fully explains the disparity. Thus, for example, while valuation likely increases with the duration of ownership, some experiments have found “an instant endowment effect.”236 The WTA-WTP disparity was found even when decision and transaction costs were practically the same for trading and not trading.237 Likewise, while the herd effect may explain extreme reluctance to trade when subjects were asked to express their willingness to trade by raising their hand, it cannot account for the reluctance they exhibited when asked to mark their choice on a form.238 Similarly, the endowment effect has been found even when experiments were conducted in a manner that eliminated any possible inference from the fact that subjects received one object rather than the other.239 The same is true for measures taken to eliminate misconceptions of the elicitation-of-value procedure.240
Once such variables are controlled for, it appears that the main explanation for the absence of an endowment effect in some experimental designs (which were designed to question its very existence) is that these designs considerably weakened the perception (p.56) of reference states.241 There is no endowment effect without a sense of endowment. These designs were also very different from the conditions outside the laboratory, where the reference state is usually quite clear and the endowment effect is equally apparent.242 Ultimately, loss aversion remains one of the central explanations for the endowment effect.
7. Sunk Costs and Escalation of Commitment
Expected utility theory posits that when choosing between different courses of action, only future costs and benefits should be taken into account. Costs that have already been incurred and cannot be recovered should not affect decisions if they do not bear on future costs or benefits, as the past cannot be changed. For instance, a ticket holder should decide whether to go to a concert according to the expected net benefit of doing so, irrespective of how much she has paid for the ticket, if at all. However, numerous laboratory and field experiments, as well as empirical studies, have shown that very often, people do not disregard such sunk costs in their decisions. Rather, they tend to persist in endeavors the more resources, time, or effort they have already invested in them. Thus, in a randomized field experiment, people who paid more for theater season tickets attended more plays than those who paid less.243 Similarly, after having purchased two differently priced items and being forced to choose only one of them to consume, most people choose to consume the more expensive item, even if they might otherwise have preferred the other one, or had no particular preference.244 In the same vein, entrepreneurs keep investing in failed projects,245 and countries persist in fighting hopeless wars.246 Such escalation of commitment characterizes decisions made by laypeople and professional decision-makers alike.247
Various determinants—economic, organizational, social, and psychological—influence escalation of commitment.248 As for the psychological determinants, two primary explanations have been offered for the escalation-of-commitment phenomenon: self-justification and the avoidance of sure losses. According to the first explanation, (p.57) people are unwilling to admit to themselves and to others that their initial decision has proven wrong and wasteful.249 Self-justification is related to the confirmation bias, that is, the tendency to gather and process information in a manner that conforms to one’s prior commitments.250 According to the other explanation, escalation of commitment stems from people’s aversion to sure losses. To avoid sure losses, people tend to keep investing in failing projects even if the prospects of turning them into successful (or break-even) ones are slim.251 In one of the early escalation experiments, Barry Staw found that participants allocated significantly more research and development money to failing corporate divisions than to successful ones.252 In accordance with prospect theory, sure losses are overvalued (the certainty effect), and people are risk-seeking in the domain of losses.253
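The role of the convex value function for losses can be made concrete with a stylized choice between a sure loss and an actuarially equivalent gamble. In the sketch below, the dollar amounts are hypothetical, the parameters are the commonly cited estimates, and probability weighting is omitted for simplicity.

```python
LAMBDA, BETA = 2.25, 0.88   # commonly cited loss-aversion and curvature estimates

def v(loss):
    """Prospect-theory value of losing `loss` dollars (loss >= 0)."""
    return -LAMBDA * loss ** BETA

sure_loss = v(750)                        # lose $750 for certain
gamble = 0.75 * v(1000) + 0.25 * v(0)     # 75% chance to lose $1,000

# Both options have the same expected monetary value (-$750), but the
# convex value function for losses makes the gamble less painful:
assert gamble > sure_loss
```

The decision-maker thus prefers the gamble over the sure loss of equal expected value, which is the risk-seeking pattern underlying escalation of commitment.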
The two explanations are not mutually exclusive, and they both play important roles. While escalation of commitment is stronger when the decision-maker is responsible for the initial investment, it is also evident when the initial decision was made by someone else.254 Sunk costs affect people’s decisions even when they do not feel responsible for a wrong decision, as in the case of choosing which of two products to consume, when one costs more than the other. It has also been suggested that the two explanations are interrelated: people who are in greater need for self-justification are less likely to adjust their reference point after the failure of the initial investment, and are therefore more susceptible to escalation of commitment.255
Interestingly, the different attitude to losses and gains possibly explains not only deviations from expected utility theory by overinvesting in failed projects, but also by underinvesting in projects whose costs exceed their initial mental budget—the so-called de-escalation of commitment. When people set a mental budget to control their resource expenditures, they may stop investing in an endeavor when additional expenditures would exceed this budget, even if the expected benefit from such an investment is larger than the incremental cost.256
Several interrelated biases revolve around the role of motivation—especially self-serving motivation—in people’s perceptions, judgments, and choices. Beginning with Leon Festinger’s influential theory of cognitive dissonance in the 1950s,257 and continuing with Peter Wason’s seminal studies of the confirmation bias in the 1960s,258 a very large body of research has dealt with these phenomena.259 However, while it was stated that the confirmation bias “has probably attracted the most enduring interest of all cognitive biases,”260 it was also noted that “the proof of this bias remains elusive”261 (and it may be the case that the former statement reflects the self-serving bias of those who study the confirmation bias). This section highlights the main findings (and controversies) regarding motivated reasoning and confirmation bias, as well as several related phenomena, such as overoptimism, naïve realism, and illusion of control. It also discusses studies of behavioral ethics, which predicate unethical behavior on self-serving biases.
2. Motivated Reasoning and Confirmation Bias
While people are sometimes motivated to arrive at an accurate conclusion, whatever that may be, at times they aim to reach a particular, directional conclusion—often the one that best serves their interests. When people truly strive to reach an accurate conclusion, they use the most appropriate strategy to attain that goal. In contrast, directional goals prompt people to use strategies that are likely to yield the desired conclusion. Interestingly, directional processing of information can be as detailed and thorough as accuracy-motivated processing. Information processing can be thorough and biased at the same time.262 Motivated reasoning is evident in both pre-decisional acquisition and processing of information, and in post-hoc justification of one’s decisions.263 While people may knowingly and purposively acquire and process information in a way that confirms their prior views, expectations, and decisions, our focus is on the less conscious and mostly automatic processes—namely, (p.59) on System 1 thinking, which is then backed up by System 2 reasoning.264 This interplay between System 1 and System 2 may explain why motivated decision-makers tend to bias their judgments only to the extent necessary to corroborate their judgment, subject to a reasonableness constraint.265
One key manifestation of motivated reasoning is the confirmation bias. Confirmation bias (also known as the myside bias) denotes the tendency to seek and process information in ways that are partial to one’s interests, beliefs, and expectations.266 In an early study, Wason presented participants with triplets of numbers (e.g., 2-4-6), and asked them to infer the rule used to generate them (e.g., numbers increasing by 2, or increasing even numbers). The participants then tested their hypothesis by suggesting other triplets and being told whether they were consistent with the rule. To test their hypothesis, participants should have logically suggested triplets that did not conform with the hypothesis. In fact, they tended to suggest triplets that conformed with it.267 It has been found that people learn from experience to use better strategies for testing their hypotheses, but that time pressure exacerbates the bias.268 The tendency to look for confirmatory evidence may be stronger in some laboratory experiments involving abstract and unfamiliar tasks than in familiar, daily ones, but it characterizes the latter as well.269
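The logic of the task can be sketched in a few lines; the specific hypothesis and test triplets below are illustrative. Because the participant's hypothesis is strictly narrower than the true rule, conforming triplets can never distinguish the two, whereas a non-conforming triplet can refute the hypothesis.

```python
def hypothesis(t):
    """The participant's guess: numbers increasing by 2."""
    a, b, c = t
    return b - a == 2 and c - b == 2

def true_rule(t):
    """The experimenter's actual rule: any increasing numbers."""
    a, b, c = t
    return a < b < c

confirming_test = (10, 12, 14)     # conforms to the hypothesis
disconfirming_test = (1, 2, 3)     # violates the hypothesis

# A confirming test gets a "yes" under both rules, so it is uninformative:
assert hypothesis(confirming_test) and true_rule(confirming_test)

# A disconfirming test separates them: the hypothesis says "no,"
# the rule says "yes," so the hypothesis is refuted:
assert not hypothesis(disconfirming_test)
assert true_rule(disconfirming_test)
```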
People not only look for confirmatory evidence, they also tend to ignore disproving evidence, or at least give it less weight, and to interpret the available evidence in ways that confirm their prior attitudes. People see in the data what they are looking for, or expect to see.270 When people are presented with arguments that are incompatible with their existing beliefs, they automatically scrutinize them longer, subject them to more critical analyses, and consequently judge them to be weaker than arguments compatible with their own beliefs.271
Biased search, interpretation, and recollection may account for belief perseverance (irrationally sticking to beliefs notwithstanding falsifying evidence),272 the primacy effect (p.60) (attributing greater weight to the first piece of evidence, compared with subsequent ones),273 and attitude polarization (increased disagreement between people who are exposed to the same additional information).274
The confirmation bias is stronger for emotionally charged issues and for deep-seated beliefs.275 It also correlates with some personality traits. For example, people vary in terms of their defensive confidence, namely their belief in their ability to successfully defend their attitudes against counterarguments. Ironically, people with greater defensive confidence are less prone to the confirmation bias, because they are more willing to consider antithetical evidence, which sometimes leads them to change their minds (whereas people with low defensive confidence tend to disregard disconfirming information).276 Very little correlation has been found between people’s intelligence and their tendency to gather and interpret information in a way that confirms their prior beliefs and attitudes.277 A specific measure of individual-level cognitive openness—the antonym of close-mindedness and susceptibility to the confirmation bias—has been developed recently.278
As is often the case in JDM research, it is one thing to characterize the confirmation bias, and another to assess the extent to which it deviates from a normative model of decision-making. Skepticism toward evidence and arguments that are contradictory to one’s established beliefs is often prudent and rational. Constant questioning of one’s own attitudes is mentally stressful, and practically impossible given our limited cognitive resources. More troubling, Baron has found that college students tended to assess sets of arguments, ostensibly made by other students, as better when the arguments all pointed in one direction than when both sides were presented—even when the final conclusion was contrary to the assessors’ position.279 The confirmation bias may therefore be generally adaptive, yet detrimental in predictable ways.280
The tendency to gather and process information in a confirmatory manner has been invoked to explain various real-life phenomena, from mystical beliefs and witch-hunting, to policymaking, judicial reasoning, and the slow development of medical and scientific knowledge throughout history.281 In fact, many of the controversies in the social (and (p.61) other) sciences, including those between economists committed to rational choice theory and behavioral economists, may be rooted in each camp’s confirmation bias (and there is no reason to assume that the authors of this book are immune to it, either). Plausibly, what makes scientific knowledge more reliable than other forms of knowledge is not each scientist’s open-minded attempts to falsify her own findings, but rather the insistence of science as an institution on falsifiability and the strong motivation of scientists to falsify other scientists’ theories.282
3. Overoptimism and the Better-than-Average Effect
The term overoptimism has been used to describe various psychological phenomena.283 Here we use it to denote instances where people overestimate the prospects of positive or desirable things, or underestimate the prospects of negative or undesirable ones. We therefore exclude from the present discussion optimism as a personality trait, as well as the framing of situations or events as either positive or negative (whether the glass is half empty or half full, so to speak). The present discussion includes the better-than-average effect, but leaves out overconfidence, which will be discussed separately.284
Overoptimism requires a comparison between one’s estimations and an external benchmark. Depending on the circumstances, various benchmarks may be deemed relevant, including actual future outcomes (in the case of predictions), probability values based on the general base rate (e.g., in the case of one’s probability of divorce), and social comparison (i.e., estimates made by other people).285
Overoptimism has been found in various experimental settings. In an early study, subjects were each given a pack of cards, told that it contained marked cards in a certain ratio (e.g., 7 out of 10), and then asked whether or not they expected to draw a marked card. Half of the subjects were told that they would gain a point if they drew a marked card (the desirable condition), and the other half that they would lose a point if they drew a marked card (the undesirable condition). Other subjects participated in the neutral condition—where no points were gained or lost when a marked card was drawn. All subjects were informed of the outcomes of their draws only at the end of the entire procedure. It was found that the stated expectations were highest in the desirable condition and lowest in the undesirable condition, with the neutral condition in between.286 Similar results were obtained in studies in which the desirable outcomes (from the participants’ perspective) were not manipulated experimentally, but preexisted—as in predictions about (p.62) one’s future health or professional success, election outcomes, or the results of football games.287 In one survey, conducted in the United States, people who had just married, or were about to get married, were asked about the divorce rate in the United States and the likelihood that they personally would divorce. While the median response to the first question was 50 percent, which was the correct answer, the median answer to the second question was 0 percent.288
Many studies have shown that people evaluate themselves more favorably than they evaluate their peers. In one study, about half of the subjects believed that they were among the safest 20 or 30 percent of the drivers, and about 80 percent believed themselves to be safer than the median driver.289 In another study, subjects who scored in the bottom quartile on tests of various intellectual skills believed that they did better than the average participant.290 The magnitude of this better-than-average effect has been found to be greater for controllable traits than for uncontrollable ones,291 and for ambiguously defined traits than for specific ones.292 The magnitude of the effect also depends on the level of abstraction of the comparison target. People are less biased when they compare themselves with an individuated target—especially someone with whom they have personal contact—than with a non-individuated target, such as the average student.293
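The arithmetic behind such findings can be made concrete: by definition, at most half of any group can rank above its own median, so self-assessments like those of the drivers above are collectively impossible. A minimal sketch (the ratings are hypothetical):

```python
def share_above_median(self_ratings):
    """Fraction of a group whose rating exceeds the group's median."""
    ordered = sorted(self_ratings)
    n = len(ordered)
    # Median: middle value for odd n, mean of the two middle values for even n.
    if n % 2 == 1:
        median = ordered[n // 2]
    else:
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    return sum(1 for r in self_ratings if r > median) / n

# However skilled the group is overall, the share above its own median
# can never exceed one half:
print(share_above_median([3, 5, 6, 7, 9]))  # 0.4
```

The 80 percent of drivers who rated themselves safer than the median driver thus cannot all be right: in any group, at least half must be at or below the median.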
It has been demonstrated that people update their estimates once they receive information indicating that their initial estimate was overly pessimistic, much more so than following information showing that the initial estimate was overly optimistic. Such selective updating (resulting from reduced coding of negative information in the brain) allows people to maintain their overoptimism despite the presence of disconfirming evidence.294
(p.63) Relatedly, people tend to attribute positive events to their own internal and stable character, and negative events to external, unstable causes. They make more internal attributions for success than for failure.295
Another manifestation of self-serving bias—on the borderline between overoptimism and overconfidence, and particularly germane to legal contexts—concerns people’s assessment of their ability to justify their positions through argument. People tend to overestimate this ability, especially when they are emotionally invested in the issue in question.296
It is important to note that overoptimism and the better-than-average effect do not necessarily stem from motivated reasoning or wishful thinking. Some of the effects described above appear to be byproducts of other, non-motivated cognitive biases. According to the egocentric-thinking account, when comparing themselves to others, people focus on the likelihood that they would experience an event, rather than on the likelihood that the comparison target would. This theory accounts for the finding that sometimes people give higher comparative estimates when the absolute frequency of a negative event is high, but lower comparative estimates, and even overly pessimistic ones, when the frequency is low (whereas wishful thinking would predict overoptimism in both cases).297 It has similarly been shown that when predicting the results of football games, subjects assign higher probability of winning to a team not only when they are promised a monetary reward if that team wins, but also when a team is made more salient by other means, without the promise of any reward.298 Overoptimism may also be associated with the so-called projection bias, namely people’s tendency to underestimate the extent to which their tastes and preferences might change in the future.299 Projection bias leads to overoptimism when, for example, people underappreciate the effects of the gradual increase in their standard of living on their consumption preferences—which may in turn lead to insufficient saving. Some instances of overoptimism may also be perfectly rational given people’s limited information about others.300 However, motivation-based overoptimism—that is, overoptimism originating in people’s desire to be more skillful (in absolute terms and in comparison with others), to (p.64) have positive experiences, win competitions, and attain higher social status by appearing more optimistic—does exist.301
Presumably, accurate beliefs about reality are key to optimal decision-making. However, the prevalence of overoptimism suggests that it may actually be adaptive. Indeed, a meta-analysis of the available empirical evidence attests to a small overall depressive realism effect: depressed individuals are less prone to overoptimism than non-depressed ones.302 Optimism also contributes to one’s physical health. Since expecting positive outcomes reduces stress and anxiety, and facilitates health-promoting activities, optimists tend to live longer and be healthier. Overoptimism may lead people to work harder, which in turn may facilitate greater achievements. Better-than-average perceptions of one’s spouse and children are also very prevalent and highly adaptive.303
That said, overoptimism has adverse and even dangerous effects, as well. Overly optimistic people are more likely to procrastinate when required to perform an unpleasant task.304 They may refrain from taking necessary precautions, neglect periodic medical examinations, and fail to watch their diet. Similarly, unrealistic optimism about one’s future income may lead to excessive borrowing;305 entrepreneurial wishful thinking may lead to excessive entry into competitive markets;306 and overoptimism about litigation outcomes may hinder mutually-beneficial compromises.307
4. Overconfidence
The term “overconfidence” has been used to denote several phenomena, including the better-than-average effect discussed above.308 In this section we focus mainly on the degree of confidence that people express about the accuracy of their assessments and judgments (also known as overprecision or miscalibration).
A common method of examining people’s confidence is to ask them to answer a list of questions, and then state their degree of confidence in the correctness of their answers to each question. For example, a participant may indicate that she is 10, 20 . . . or 100 percent (p.65) confident in the correctness of any answer. For each level of confidence (say, all the answers that the participant was 70 percent confident about), the percentage of correct answers is then compared with the stated confidence. Such experiments have traditionally used general knowledge questions, word-spelling tasks, and the like.309 Typically, a considerable gap is found between the percentage of questions participants had answered correctly and their (higher) stated degree of confidence. Another widely used method for examining confidence utilizes the confidence interval paradigm: asking people to make an assessment or prediction within a given interval of confidence, say 90 percent (so that there is only a 10 percent chance that the assessment is wrong), or to estimate the percentile at which they are confident in their estimation.310 Again, it has been found that the correct figure lies outside the stated interval much more often than subjects expect. Several explanations have been offered for overconfidence, but none appears to be general or conclusive.311
Overconfidence has been found to be most pronounced when tasks are very difficult, and to diminish with the ease of the task—possibly due to people’s limited ability to assess the difficulty of various tasks. The finding that overconfidence considerably diminishes with easier tasks raises concerns about the external validity and generalizability of the laboratory findings, since in the natural environment people are arguably better at assessing the difficulty of common tasks.312 Furthermore, the difference between hard and easy tasks raises doubts as to whether revealed overconfidence is at all related to a self-serving bias, as the latter should presumably characterize both hard and easy tasks.313
However, overconfidence has also been found in experiments that appear to be less vulnerable to these critiques, such as one in which subjects were asked to identify contradictions in a text.314 In another experiment, subjects who were provided with a simple decision rule tended to use their own judgment rather than to follow the rule—which led them to do worse. Such overconfidence was even more pronounced among subjects who had (or thought they had) a relevant expertise. Consequently, they not only did worse than they would have done had they followed the decision rule, but even worse than the nonexperts.315
(p.66) There is conflicting evidence about the effect of professional training and expertise on people’s confidence. On the one hand, the assessments by meteorologists of the correctness of their own weather forecasts were found to be fairly accurate—plausibly thanks to the constant feedback they receive.316 On the other hand, other professionals—such as physicians, lawyers, and scientists—were found to be overconfident.317 Overconfidence may affect the choice between discretionary, holistic decision-making and the use of evidence-based guidelines that integrate data based on statistical meta-analyses. Much of the detrimental underuse of such guidelines is attributed to professionals’ overconfidence.318
Encouraging people to consider more information and possible alternatives reduces overconfidence. Providing people with feedback has not, however, produced clear-cut results.319
In general, overconfidence may have beneficial side effects in social interactions, such as negotiation, persuasion, and medical treatment (where a physician’s overconfidence in the expected success of a treatment may enhance the prospect of success, thanks to the placebo effect).320 Hence, it may have evolutionary adaptive advantages. However, such incidental benefits are likely to exacerbate this bias, and overconfidence may well lead people astray—for example, in litigation and settlement decisions.321
5. Naïve Realism and False-Consensus Effect
People perceive and interpret reality differently. In one famous study, undergraduate students from two universities watched a film of a rough football game that actually took place between their universities’ teams. The students were asked to mark any rule violation by each team, and whether those violations were “mild” or “flagrant.” Judging from their answers, one might think that the students saw different games, although for each student the version that he or she saw was very real.322 For example, Princeton students saw twice (p.67) as many infractions made by the Dartmouth team as the Dartmouth students saw, and while almost nine-tenths of the Princeton students thought that the Dartmouth team had instigated the rough play, a majority of Dartmouth students believed that both teams were to blame.323
This football experiment demonstrates one aspect of naïve realism—namely, the human tendency to believe that we see the world around us objectively, while people who disagree with us must be uninformed or biased.324 People assume that they see reality “as it is,” and that their beliefs and attitudes emanate from an unbiased comprehension of the evidence available to them. It follows that other rational people who have access to the same information and process it in open-minded fashion should reach the same conclusions. It further follows that if other people do not share their conclusions, it must be because the former are un- or misinformed, because they are irrational or otherwise unable to consider the data, or because they are biased by self-interest, ideology, or some other distorting influence.325
Naïve realism underpins the false-consensus effect—people’s tendency to overestimate the extent to which their beliefs and opinions are shared by others.326 People also believe that whereas their own beliefs are not indicative of their personal dispositions, conflicting attitudes do reflect the personality of their proponents.327 The more a situation or a choice is open to conflicting interpretations, the greater the false-consensus effect—which points to the role played by people’s subjective interpretation in producing this effect.328
A corollary of naïve realism is that people readily recognize this bias, and a host of other biases, in other people’s perceptions and judgments, but they often have a blind spot regarding their own naïve realism (and other biases).329 Even when people are aware of their own biases, they tend to believe that they are more capable than others of assessing the magnitude and effect of those biases.330
(p.68) Naïve realism is related to the formation and maintenance of in-group and out-group identities. It thus makes the resolution of social, ethnic, and political conflicts extremely difficult.331
6. Fundamental Attribution Error
Understanding why other people behave as they do is as essential in daily life as it is in professional decision-making by educators, managers, and judges. Attribution theories seek to explain this process. They usually distinguish between internal causes of behavior, such as personal traits and dispositions, and external or situational ones, such as social norms and obedience to instructions. In a classic experiment, Edward Jones and Victor Harris asked participants to assess the true attitude of a person who had written an essay that was either supportive or critical of Castro’s regime. Participants were also told either that the author had written the essay voluntarily, or had been instructed to do so by an authority figure. While subjects took the issue of choice into account, one striking finding was that even in the no-choice condition, they tended to believe that the essay reflected the writer’s true attitude.332 Subsequently dubbed the fundamental attribution error333—also known as the correspondence bias—the tendency to attribute other people’s behavior to their personal attitudes and motivations, rather than to environmental influences and constraints, has been documented in numerous studies.334 However, like virtually all phenomena discussed in this chapter, the fundamental attribution error, and the very distinction between dispositional and external causes of behavior, have been the subject of some controversy.335
One explanation for the fundamental attribution error is observers’ lack of awareness of situational constraints. To judge the extent to which a given behavior is a product of inner inclinations or external forces, one must be aware of the latter. Sometimes, however, external forces, such as audience pressure or parental threats, are simply invisible to the onlooker, and the onlooker fails to grasp their true impact on the person in question, for example due to naïve realism.336 Another explanation is unrealistic expectations of (p.69) behavior—expectations that do not give due weight to people’s conformity, that is, to their tendency to adapt their behavior to match group norms.337 Other explanations have been offered as well.338
The fundamental attribution error is affected by various variables. People are more prone to commit it when they have fewer cognitive resources at their disposal to assess the causes of the observed behavior (for example, when they perform an additional cognitive task at the same time).339 It thus appears that correcting the initial perception of people’s behavior by considering external circumstances is a more demanding, deliberative process. Studies have also shown that negative moods decrease, and positive moods increase, the fundamental attribution error.340 Finally, there appear to be cultural differences regarding the inclination to attribute behavior to personal dispositions. Specifically, some studies have shown that East Asians tend to recognize the causal power of situations more than Westerners.341
7. Planning Fallacy
Overly optimistic predictions regarding the time (and costs) involved in completing projects have been repeatedly noted, for example, in the construction and the software engineering industries. This phenomenon, known as the planning fallacy,342 is equally prevalent in daily life. Thus, having published several books with a leading publishing house, one of us has noticed that whenever he submitted a book manuscript by the deadline set out in the publication agreement, it took several months before the production process actually got underway. It occurred to him that the publisher had learned from experience that authors rarely submit manuscripts on time, and adjusted its work plans accordingly. Realizing this, in his next publication agreement, the author set an unrealistically early submission date.
Kahneman and Tversky have argued that a sound prediction should rest on two types of information: information about the particular case under consideration (the so-called (p.70) singular information) and information about similar cases, based on past experience, when available (distributional information). They attributed the planning fallacy to excessive focus on the singular information, compared with the distributional one,343 as the latter cautions against overoptimism. Further experimental studies have shown that the comparative neglect of past experience is due to several factors.344 First, the very engagement in a planning activity elicits concentration on the future rather than on the past. It follows that providing incentives for early task completion exacerbates the planning fallacy, since it reinforces the focus on detailed future plans, at the expense of relevant past experiences.345 Similarly, since a sense of power and control induces goal-directed attention (and disregard of other information), it too aggravates the planning fallacy.346 A second reason for neglecting past experience is that, as in the case of base-rate neglect in probability assessments,347 people tend to focus on specific, rather than general, information—that is, on the case at hand, rather than past experience. Third, when judging their previous behavior, people tend to attribute success to their own abilities and efforts—and failures to external, supervening events, which may not seem relevant to the current project.348 Unsurprisingly, when subjects are led to recall past experiences and relate them to the task at hand, they make much more realistic predictions.349 It has also been found that while people are overly optimistic about the completion time of their own projects, external observers may be overly pessimistic.350 Finally, overoptimism about task completion is also linked to the desire to be seen in a positive light by others.
Thus, in one experiment, people exhibited the planning fallacy when they made predictions verbally to a familiar experimenter, but not when making them anonymously.351
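Kahneman and Tversky’s remedy—tempering the singular forecast with distributional information—can be sketched as a simple adjustment rule. Everything here is illustrative: the interpolation scheme, the weight, and the figures are assumptions for the sketch, not a procedure proposed in the literature cited above.

```python
import statistics

def adjusted_forecast(planned_days, past_overrun_ratios, weight=0.5):
    """Blend a singular forecast with distributional information.

    past_overrun_ratios: actual/planned duration for similar past
    projects (1.4 means a project took 40% longer than planned).
    weight: how much to trust the historical record (0 = ignore it);
    the default of 0.5 is purely illustrative.
    """
    typical_overrun = statistics.median(past_overrun_ratios)
    # Interpolate between the raw plan and the plan scaled by the
    # typical historical overrun.
    return planned_days * ((1 - weight) + weight * typical_overrun)

# A 10-day plan, where similar past projects ran 20-50% over plan:
estimate = adjusted_forecast(10, [1.2, 1.5, 1.4])
```

Giving the distributional record half the weight stretches the 10-day singular forecast toward the historical record—precisely the correction that excessive focus on singular information prevents.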
Forecasting an early completion may incentivize people to meet their goals by increasing their effort and persistence, and by inducing consistency motivations.352 It may (p.71) thus be evolutionary adaptive—at least on some occasions. There are, however, grounds to believe that there is a difference in this regard between tasks that can be completed in a single, continuous session (and are therefore less susceptible to interruptions), and those that cannot: overly optimistic predictions foster early completion of the former, but not the latter.353
Besides the automatic and largely unconscious mechanisms that produce the planning fallacy, underestimating the time and costs of future projects may have strategic advantages. For example, once an organization has embarked on a project, it is most likely to keep investing in it (possibly due to the sunk costs effect), even if it turns out that the predicted costs and completion time were overly optimistic. Hence, when decision-makers must choose between competing projects, those who advocate a particular project may strategically underestimate its costs and completion time. Consequently, it may be difficult to disentangle automatic and deliberative causes of the present phenomenon.
8. Illusion of Control
People’s tendency to attribute their successes to themselves, and their failures to external factors,354 is apparently inapplicable to circumstances in which outcomes depend on sheer chance. Nevertheless, it has long been shown experimentally that skill-related factors—such as competition, choice, contemplating successful strategies, active involvement, and familiarity—lead people to believe that they have control over objectively chance-determined events.355 For instance, observers tend to assume, implicitly, that a person who rolls dice himself has greater control over the outcome than when someone else rolls the dice for him.356 In fact, it has been observed that dice players behave as if they were controlling the outcome of the toss: they threw the dice softly when they needed low numbers, and hard for high numbers.357
A meta-analysis of dozens of studies has shown that the illusion of control manifests itself across various tasks and circumstances.358 The illusion’s effect was found to be larger with regard to participants’ belief in their ability to predict outcomes than in relation to their ability to control them. It has also been shown that the magnitude of the effect is significantly greater in experiments that employ an indirect, qualitative measure of the effect (for example, whether participants are willing to trade a lottery ticket) than in those using an indirect, quantitative measure (such as the number of trials in which participants feel (p.72) confident about the outcome) or direct assessments (such as when participants are asked how much control they feel they had over the outcome).359
A common denominator of the studies described thus far is that they have focused on situations where people have very little or no control over outcomes, but nevertheless believe that they do have such control. A more recent study revealed a complementary—and in a sense, opposite—phenomenon: when people have a great deal of control, they tend to underestimate it.360
9. Behavioral Ethics
Egocentrism and motivated reasoning are crucial to understanding people’s ethical behavior—especially the mechanisms that allow ordinary people to violate ethical norms while preserving their self-image as moral people. This topic—commonly known as behavioral ethics—has attracted considerable attention in recent years.361
Behavioral ethics draws heavily on the notion of dual reasoning (System 1 and System 2),362 and argues that self-interested behavior is largely automatic. While JDM research focuses on how people’s heuristics and biases often hinder the advancement of their interests and goals, behavioral-ethics studies show how automatic processes facilitate the promotion of people’s interests and goals. However, unlike standard economic analysis, which posits that people deliberately maximize their own utility, behavioral ethics focuses on the effect of self-interest on people’s automatic cognitive processes.
Motivation—in particular, the motivation to advance one’s self-interest—affects reasoning through the cognitive processes by which people form impressions, determine their beliefs, assess evidence, and make decisions.363 Motivated, self-serving reasoning affects not only the decision process, but also ex-post recollection. Lisa Shu and her colleagues have demonstrated that people misremember both what they have done and what they were told to do, when it allows them to believe that they have acted ethically.364 In their experiments, participants who were given an opportunity to cheat tended to forget the contents of an honor code they had previously read, far more than participants who were not given such an opportunity.
(p.73) Considerable evidence supports the claim that self-interest affects ethical behavior through System 1. Self-interest is “automatic, viscerally compelling, and often unconscious,” whereas compliance with professional obligations is “a more thoughtful process.”365 The automatic nature of self-interest makes it difficult for people to be aware of it; hence they are unlikely to counteract its impact on their reasoning. Thus, in some experiments, subjects were first asked to make estimates on behalf of one party (e.g., a prospective buyer or seller), and then incentivized to make estimates that were as objective as possible. Not only did the affiliation with one party bias the subjects’ original estimates, but this bias carried over to the subsequent estimate, despite the monetary incentive for accuracy. It appears that subjects actually believed their biased assessments.366 The notion that unethical behavior is automatic (and is sometimes curtailed by self-control) is supported by the finding that time pressure increases self-serving unethical behavior, whereas ample time reduces such behavior (provided that people are unable to come up with justifications for their deeds).367 Finally, a recent study showed that when subjects were experimentally manipulated into an intuitive/automatic mindset, they tended to act in a more self-interested manner than when they were manipulated into an analytical/deliberative mindset.368
While the majority view in the literature emphasizes the role of System 1 in unethical behavior, the picture is more nuanced: System 1 thinking does not always lead to selfish behavior, and people do sometimes deliberately and consciously violate moral and social norms.369 In circumstances of explicit social exchange (such as prisoner’s dilemma and public goods games), cooperation and reciprocity, rather than self-interested defection, appear to be the automatic response.370
A common theme of behavioral-ethics studies—closely related to the notion that unethical behavior is often automatic rather than calculated—is that ordinary, “good people” sometimes do “bad things.”371 People tend to display moral hypocrisy: they are motivated (p.74) “to appear moral in one’s own and others’ eyes while, if possible, avoiding the cost of actually being moral.”372 However, while rational choice theory might predict total disregard of ethical norms, in reality people tend to infringe ethical norms only to the extent that allows them to maintain their self-image as honest people.373 As Nina Mazar and her colleagues have put it, “people behave dishonestly enough to profit but honestly enough to delude themselves of their own integrity. A little bit of dishonesty gives a taste of profit without spoiling a positive self-view.”374
A telling demonstration of this observation is found in an experiment conducted by David Bersoff, in which all subjects were “mistakenly” overpaid for their participation.375 The conspicuousness of the unethicality of not correcting this mistake was varied between subjects, as was the identity of the victim of this behavior (an overseas firm that had financed the experiment, or the experimenter himself), and the extent to which subjects were indirectly induced to deliberate on ethical issues. The more difficult it was made for subjects to ignore the unethicality of keeping the overpayment for themselves, the more they tended to correct the overpayment (the same was true when the victim of cheating was a specific person, the experimenter, rather than a faceless, big foreign firm).376 By the same token, when people are faced with a self-benefitting choice that might potentially harm someone else, they prefer not to know whether such harm would ensue, so as to make that choice in good conscience.377 Other studies similarly support the claim that people tend to cheat only to the extent that they can maintain their self-image as honest people.378
Ann Tenbrunsel and David Messick have argued that people use several devices to avoid recognizing the unethicality of their behavior.379 These include the use of euphemisms (e.g., “creative accounting”), ethical numbing when a morally dubious behavior is repeated,380 and putting the blame on others (either an entire group of people, or one’s superiors). In fact, under certain circumstances, these and comparable mechanisms of moral disengagement (p.75) can lead not only to lying and cheating, but to the perpetration of large-scale atrocities as well.381 Other mechanisms of moral disengagement include moral justification, namely rationalizing immoral behavior as serving an important purpose; advantageous comparison, that is contrasting the behavior in question with an even more reprehensible conduct; and distortion of consequences, especially minimizing the seriousness of the adverse effects of one’s behavior.382 Unethical behavior is also deemed more justified when it benefits not only the actor but others as well—and thereby perceived as altruistic.383
Although studies have shown that people care about fairness,384 it appears that concerns about fairness do not necessarily curb unethical behavior, because fairness is a highly malleable concept. Thus, studies conducted in the context of bargaining have shown that people make self-serving judgments of fairness, and that this bias increases with the complexity of the situation.385 As previously noted, people can also bypass the fairness issue by avoiding information about the effect of their behavior on others.386
So far, we have described situational factors that affect people’s ethical behavior, and the mechanisms that facilitate unethical behavior by ordinary people. To fully understand unethical behavior, however, two further dimensions should be taken into account: individual characteristics—including personality traits and demographic variables—and the social and organizational environment.
A meta-analysis of dozens of studies has found that cognitive moral development and the inclination to attribute life’s events to one’s own conduct (as opposed to external causes, such as fate or powerful others) were both inversely related to unethical behavior. A relativistic moral worldview and Machiavellianism (the inclination to promote one’s interests even if it entails harming or deceiving other people) were both positively related to unethical behavior.387 Based on Albert Bandura’s list of mechanisms of moral disengagement,388 Celia Moore and her colleagues developed a single measure of people’s propensity to morally disengage, based on subjects’ answers to eight questions, and demonstrated its usefulness in predicting several types of unethical organizational behavior.389
(p.76) Turning to demographic variables, the meta-analysis mentioned above found a weak correlation between gender and the inclination to behave unethically (men are more inclined to behave unethically than women), and between age and unethicality (younger people tend to act more unethically). No correlation was found between level of education and unethicality.390
The third dimension explored by behavioral ethics, along with situational factors and individual characteristics, is that of social norms (including organizational culture, when relevant). Psychologists have long studied the conformity effect.391 People’s tendency to adapt their behavior to match group norms is manifested in unethical behavior, as in other contexts. Thus, for example, Francesca Gino and her colleagues examined experimentally how an example of unethical behavior by a confederate affects the behavior of other participants.392 They found that such an example increased unethical behavior when the confederate was perceived as an in-group member, but decreased it when he was considered an out-group member.393 The closer people feel to someone who has set an example of unethical behavior, the less harshly they judge that behavior, and the more likely they are to engage in such behavior themselves.394 More generally, it has been demonstrated that cooperation significantly increases unethical behavior.395
E. Reference-Dependence and Order Effects
A common feature of human perception and processing of information, judgment (including self-assessment) and decision-making, prevailing moral convictions, and bounded ethicality, is relativity, or reference-dependence. People’s perceptions, judgments, and choices are strongly affected by context, and are typically comparative in nature, rather than context-independent or reflecting absolute measures.396 This is true of basic perceptions of temperature, brightness, and size.397 It is similarly true of gauging outcomes as either gains or losses relative to some reference point, and the related tendencies to attribute greater weight to losses than to gains, and to display risk aversion in the domain of gains and (p.77) risk-seeking in the domain of losses.398 By the same token, people show a greater tendency to violate moral and social norms to avoid losses than to attain extra gains.399 Evaluations of fairness and justice are similarly comparative in nature.400 Finally, according to prevailing moral convictions, reference-dependence is essential to determining the morality of actions that involve harming and benefitting people, since the distinctions between benefitting and not-harming and between harming and not-benefitting presuppose a reference point.401
Rather than try to systematically review the numerous manifestations of reference-dependence, this section focuses on several aspects of judgment and decision-making, namely the contrast and assimilation effects, anchoring, order (primacy and recency) effects, the compromise and attraction effects, and diminishing sensitivity.
2. Contrast and Assimilation Effects
Is it preferable to be a big fish in a small pond, or a small fish in a big pond? This question and similar sayings reflect the familiar observation that assessments, including self-assessments, are largely comparative. However, it turns out that comparisons may lead in different directions. Very often, they result in a contrast effect, whereby people overestimate the differences between the target and the reference. For example, in one experiment, some subjects were asked to name politicians who had been involved in a political scandal, while other subjects were not. When subsequently asked to assess the trustworthiness of specific politicians who were not involved in the said scandal, subjects in the first group judged those politicians as more trustworthy than did subjects in the second group. Evidently, the heightened accessibility of the scandal-tainted politicians’ names set a particularly low benchmark for the subjects in the first group, against which other politicians appeared to be more trustworthy.402 In another study, subjects who implicitly judged themselves against a well-groomed, highly skilled, and self-confident person tended to show decreased self-esteem, whereas subjects who implicitly compared themselves with an untidy, incompetent, and helpless person showed enhanced self-esteem.403
However, comparisons may also result in an assimilation effect, whereby people overestimate the similarities between the target and the reference. Thus, in the politicians’ trustworthiness experiment, while specific politicians were judged more trustworthy following the increased accessibility of examples of corrupt ones, politicians in general were perceived as less trustworthy.404 In a similar fashion, while exposure to archetypal examples of (p.78) extremely hostile figures (such as Dracula or Adolf Hitler), or extremely friendly ones (such as Santa Claus or Shirley Temple) produced a contrast effect in the subjects’ assessments of the hostility of an ambiguously described individual, exposure to exemplars of moderately hostile or friendly figures yielded an assimilation effect.405
Whether a comparison results in a contrast or an assimilation effect thus depends on various factors, such as the extremity of the reference (the initial assessment is more likely to indicate similarity for moderate than for extreme references),406 and whether self-assessment is made in comparison with an in-group or an out-group member (similarity testing is more likely when the reference and the target belong to the same category, and vice versa).407 It has been suggested that whether a comparison triggers a contrast or an assimilation effect depends on the process of comparison. Whereas similarity testing—leading to assimilation—makes accessible information that suggests similarity, dissimilarity testing (which leads to the contrast effect) makes accessible information that points to dissimilarity.408
Even an apparently insignificant variable—such as whether participants believed that they were born on the same day as the reference—affected the self-evaluation of physical attractiveness by low-self-esteem subjects.409 Those subjects exhibited a contrast effect when assessing their own attractiveness after viewing photos of attractive or unattractive same-sex others, whose birthdays did not match. However, after viewing photos of people whose birthday matched theirs, they assessed themselves as more attractive after viewing photos of attractive people than after viewing photos of unattractive ones.
Contrast and assimilation effects presuppose a comparison of the target with a given benchmark, yet the pertinent benchmark is often not “given,” but rather constructed or selected from among several conceivable ones. Accordingly, much of the research on contrast and assimilation effects—as well as other reference-dependence effects—has dealt with the construction of the pertinent benchmark. It has been suggested that the standard against which events are judged is often constructed post hoc through counterfactual thinking, rather than existing in advance.410
Numerous studies of contrast and assimilation have shown that these effects can be influenced by priming.411 Priming is a process in which exposure to one stimulus—be it sensory information (such as a visual image) or a concept—unconsciously influences the (p.79) subsequent response to the same stimulus and related ones.412 For example, people who are exposed to the word “bird” later recognize this word and the word “sparrow” faster than people who are exposed to the word “building.”413 Similarly, when subjects are first exposed to positive words (such as “adventurous” or “self-confident”) or negative ones (such as “reckless” or “conceited”), and subsequently asked a seemingly unrelated question such as to describe a person who engages in a series of ambiguous activities (such as considering going skydiving), those who were exposed to the positive words tended to describe the person more positively than those exposed to the negative ones.414 The common theory is that information is encoded in cognitive units, which form an interconnected network in our brain. The retrieval of information from memory is performed by spreading activation throughout the network. Priming increases the level of activation and consequently the accessibility of certain information, thereby increasing the rate and probability of recalling the primed information and of recognizing related images and concepts. Priming can influence various cognitive processes, including the construction or selection of a benchmark with which a target is compared.415
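The spreading-activation account sketched above can be caricatured in a few lines of code. This is a deliberately simplified toy model, not an implementation from the cited literature; the node names, boost values, and spread factor are all hypothetical:

```python
# Toy spreading-activation network: nodes are concepts, edges connect
# related concepts. Priming a node raises its activation and, at reduced
# strength, that of its neighbors; higher activation stands in here for
# faster and more probable recognition.

network = {
    "bird":     ["sparrow", "wing", "nest"],
    "sparrow":  ["bird"],
    "building": ["brick", "architect"],
}

def prime(activation, node, boost=1.0, spread=0.5):
    """Raise the activation of `node` and propagate a weakened
    boost to its direct neighbors."""
    activation[node] = activation.get(node, 0.0) + boost
    for neighbor in network.get(node, []):
        activation[neighbor] = activation.get(neighbor, 0.0) + boost * spread
    return activation

activation = prime({}, "bird")
# After priming "bird", the related "sparrow" is more accessible
# than the unprimed "building".
print(activation["sparrow"] > activation.get("building", 0.0))  # True
```

On this sketch, the same mechanism that speeds recognition of “sparrow” after “bird” can make one candidate benchmark more accessible than another, and hence more likely to be selected as the comparison standard.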
3. Anchoring and Adjustment
A specific example of salient information that influences people’s decisions can be found in the context of anchoring. Anchoring refers to people’s tendency to estimate values in relation to certain focal values, or “anchors,” that they are drawn to focus on while making their decisions.416 As a large body of work has demonstrated, anchors can unduly influence people’s choices. More specifically, anchors might draw decision-makers toward them, thus causing decision-makers to systematically misestimate target values.
In the typical anchoring study, subjects are asked to estimate the value of a target quantity after being exposed to a certain numeric figure that serves as the anchor. In one of their early studies, Tversky and Kahneman demonstrated how such irrelevant anchors might alter people’s evaluations.417 The participants in this study were asked to estimate the percentage of African countries in the United Nations. Before giving their estimates, however, the participants observed a spin of a “wheel of fortune” that was rigged to stop at either 10 or 65 and were asked whether the percentage of African countries in the United Nations (p.80) was higher or lower than the figure that came up on the wheel. This initial meaningless spin greatly influenced people’s decisions. Whereas participants who were exposed to a wheel outcome of 10 estimated African countries to constitute 25 percent of UN member states, those who were exposed to a wheel outcome of 65 estimated them at 45 percent.
Numerous studies have replicated this result and demonstrated the key role that anchors play in our decisions. Typical studies examine whether, and how, anchors affect the assessment of factual questions—such as the length of the Mississippi River,418 the height and width of the Brandenburg Gate,419 the number of countries in the United Nations,420 and the year that Einstein first visited the United States.421 At some level, these findings are unsurprising: when people are asked to estimate values that they are completely ignorant about, they may grasp at any available piece of information.422
More recent findings on anchoring, however, have demonstrated that the phenomenon is not limited to the narrow category of estimating obscure facts. For instance, Dan Ariely and his colleagues conducted a study in which they demonstrated that anchors can influence people’s willingness to pay for goods, such as a rare bottle of wine.423 Subjects who were exposed to a high anchor were willing to pay more than subjects who were exposed to a low anchor. In another study, Robyn LeBoeuf and Eldar Shafir demonstrated that anchoring influences how people assess physical stimuli, such as length, weight, and sound.424 In one of their experiments, participants first listened to a music clip at a volume level of 35 (the participants could not see the numeric representations of volume throughout this experiment). They then listened to the clip again, and were asked to adjust the volume to replicate the volume level that they had previously heard. While half of the subjects started this process from a level of 1 and were required to adjust the volume upward (i.e., from a low anchor), the other half started the process from a level of 70 and were required to adjust the volume downward (i.e., from a high anchor). The results showed that even in this non-numeric, purely physical setting, anchoring affected people’s choices: the participants in the low-anchor group chose a volume level that was significantly lower than those in the high-anchor group.
A key aspect of anchoring studies is that they usually build on an uninformative anchor. As noted above, in their original anchoring experiment, Tversky and Kahneman used (p.81) a wheel of fortune to generate the anchor.425 Later studies used other tools, such as the result of a die toss and subjects’ social security numbers.426 It is this uninformative nature of anchors that enables us to interpret the phenomenon as a bias—there is no reason that your valuation of a rare bottle of wine should be influenced by the last two digits of your social security number.427
The JDM literature has identified several potential mechanisms that might drive anchoring. The first focuses on the process of adjustment.428 According to this line of thought, the anchor serves as the starting point for the analysis, and people slowly adjust their estimates from the anchor toward their final estimate. However, this adjustment process tends to end prematurely, and as a result, final estimates are biased in the direction of the anchor. A second theory focuses on the suggestive process triggered by the anchor.429 It asserts that anchoring is an automatic process that occurs subconsciously. The anchor focuses our attention on a certain potential answer to the question that we face, and causes us to retrieve from our memory information that is consistent with the anchor as a plausible solution. Finally, more recent findings suggest that the anchor may distort people’s sense of scale.430 According to this interpretation, numerical anchors do not affect one’s representation or beliefs about the target stimulus, but rather alter the response scale by which judgments are rendered.
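The first of these mechanisms, insufficient adjustment, lends itself to a minimal sketch. The adjustment rate and the quantities below are purely illustrative (they are not parameter estimates from any of the cited studies), but the qualitative pattern mirrors the wheel-of-fortune result:

```python
def anchored_estimate(anchor, true_value, adjustment=0.6):
    """Adjust from the anchor toward the true value, but stop prematurely:
    only a fraction of the gap is closed, so the final estimate remains
    biased in the direction of the anchor."""
    return anchor + adjustment * (true_value - anchor)

# Two groups estimate the same quantity (say, 35) from different anchors.
low  = anchored_estimate(anchor=10, true_value=35)   # 25.0
high = anchored_estimate(anchor=65, true_value=35)   # 47.0
print(low, high)  # both estimates are pulled toward their anchors
```

Because the adjustment stops short of the target, the low-anchor group lands below the true value and the high-anchor group above it, reproducing the signature divergence of anchoring studies.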
Given the strong foundations of the anchoring effect, countering the influence of anchors on decision-makers is quite difficult. Adding incentives to the decision-making environment in the form of payments for accurate answers has been shown to be ineffective.431 Similarly, instructions highlighting the effect of anchors did not yield a significant reduction of their effect.432 Even expertise in the relevant area does not seem to matter much. For example, Gregory Northcraft and Margaret Neale conducted a controlled experiment in which experts (real estate brokers) and nonexperts (students) were asked to estimate the value of a property.433 (p.82) The subjects were randomly assigned to either a high or a low asking price, which served as the anchor in the experiment. The results showed that both the experts and the nonexperts were significantly influenced by the list price: a higher list price elicited higher valuations, and vice versa. Interestingly, while the nonexpert subjects conceded that they were influenced by the anchor, the experts presumed that they were immune to its effect.434 Employing the consider-the-opposite strategy did, however, mitigate (albeit not eliminate) the effect of anchors.435 That is, asking people to actively think of arguments in favor of a low value when facing a high anchor (and vice versa) reduces the effect of the anchor on their final decision.
4. Order Effects: Primacy and Recency
Gathering and integrating information is usually a sequential process. Presumably, unless the order in which information is received is meaningful in itself, the order should not affect one’s final judgment or choice. Often, however, human judgment and decision-making do not follow this logic. In a classic study, Solomon Asch presented subjects with a list of personal characteristics, and asked them to describe a person who possessed those characteristics. One group of subjects heard the following list: intelligent, industrious, impulsive, critical, stubborn, and envious. The other group heard the same list in reverse order: envious, stubborn, critical, impulsive, industrious, and intelligent. The ensuing descriptions differed considerably. The subjects hearing the list that began with positive qualities described an able person, whose shortcomings do not overshadow his merits. In contrast, subjects who heard the reverse list described a person whose abilities are hampered by his serious difficulties. Moreover, whereas in the first group most subjects tended to interpret the ambiguous characteristics (being impulsive and critical) in a positive fashion, subjects in the second group tended to portray them negatively.436 Order effects have been documented in various contexts, including attitude and other surveys,437 persuasion in conversational communication,438 legal decision-making,439 auditing,440 and moral judgment.441
(p.83) Asch’s experiment demonstrated a primacy effect—namely, the greater influence of earlier information on the final judgment—which accords with the confirmation bias.442 However, some studies have demonstrated a recency effect, that is, a greater impact of the later information on the final judgment.443 As in the case of the contrast and assimilation effects discussed above, there is no simple rule to determine which of the two effects, if any, characterizes judgment and decision-making under given circumstances. The most notable model, proposed by Robin Hogarth and Hillel Einhorn, and reinforced by subsequent studies, is the belief-adjustment model.444 The model describes an anchoring-and-adjustment process in which various factors, including the complexity of the stimuli, the amount of information items, and whether the information is processed step by step or at the end of the sequence, produce different order effects.445
Order effects depend, among other things, on people’s expectations about the order in which the pieces of information are presented to them. In persuasive communications, people usually expect the most important arguments to be presented first. Accordingly, when experimenters made clear to participants that the arguments were presented in a random order, no reliable order effect was found. As hypothesized, this result was mediated by the perceived importance of the arguments.446 This observation does not, however, pertain to other contexts in which an order effect has been identified.
Order effects have been shown to be eliminated or mitigated when people are accountable for their judgment,447 when experts have control over the order in which they review the evidence within their sphere of expertise,448 and when auditors, who worked in groups of three, believed there was a high risk of fraudulent financial reporting.449
5. Compromise and Attraction Effects
Rational choice theory assumes that the relative ranking of two options is context-independent—namely, that the ranking of these options is not influenced by the availability of other options. For example, a customer in a restaurant should not change her ranking (p.84) of the steak and chicken options simply because a fish dish is added to the menu. However, empirical findings, primarily from the area of consumer behavior, have demonstrated that decisions often display compromise or attraction effects.
The compromise effect denotes peoples’ tendency to choose intermediate rather than extreme options. For example, when consumers were asked to choose between a mid-range and a low-end camera, 50 percent of them chose each type. When, however, they were asked to choose among those two cameras and an additional high-end camera, 72 percent chose the mid-range option.450 Outside the market sphere, the compromise effect may explain decision-making in the political sphere (a choice between different policies)451 and in adjudication (e.g., a choice between different offenses for which a defendant may be convicted).452
The attraction effect refers to instances in which adding an inferior option (a decoy) to a choice set increases the choice share of the superior option it most closely resembles.453 For example, when subjects were asked to choose between a roll of paper towels and a box of facial tissues, more subjects chose the paper towels when the third option was a roll of clearly inferior paper towels, than when the third option was a box of clearly inferior facial tissues. In another experiment, subjects in one condition were asked whether they would like to trade $6 for an elegant pen or keep the money. In the other condition, subjects could trade the $6 for the same elegant pen or for a lesser, unattractive pen, or keep the money. More subjects opted for the elegant pen in the three-option condition than in the two-option one (almost none opted for the lesser pen).454
The compromise effect is relevant to choices involving a trade-off between different attributes, such as product quality and price (when the comparison is one-dimensional, people naturally prefer the superior option). Although it violates the assumption that people’s preferences are context-independent, the strategy of choosing the intermediate option (e.g., the product whose quality and price are both intermediate) appears to be perfectly rational when information problems and uncertainty (e.g., regarding the relative importance of various attributes) render the making of an optimal choice prohibitively costly—as they (p.85) often do.455 Hence, unlike other heuristics, the compromise effect may well be a product of a deliberative process, leading to a choice that is perceived as (among other things) less likely to be criticized by others.456 Indeed, it has been found that cognitive resource depletion (due to engagement in a previous exacting task), which commonly enhances the use of System 1’s heuristics, decreases the compromise effect,457 as do time constraints.458
Moreover, assuming that quality and prices correlate, rational consumers who cannot meaningfully calculate the quality-price trade-off, but who consider themselves as having moderate needs and tastes, may rationally opt for a compromise choice.459
While the tendency to choose an intermediate option may be a rational means of dealing with information problems, it may also, along with the attraction effect, be manipulated by marketers and other persuaders. Thus, a firm may introduce an oversized product, or one of extremely high or extremely low quality, even if it expects very low demand for it, to boost the demand for its other products.460 Concomitantly, policymakers can nudge consumers to decrease their consumption of soft drinks, for example, by requiring sellers to offer small-size drinks along with the large and very large ones.461
6. Diminishing Sensitivity
Reference-dependence underlies yet another psychological phenomenon in perception, judgment, and decision-making—namely diminishing sensitivity: the further a change is from the reference point, the smaller its impact. Two contexts in which diminishing sensitivity has been noted already are prospect theory’s value function and probability weighting.462 The reflection effect—the decreasing marginal effect of both gains and losses, resulting in risk aversion in the domain of gains and risk-seeking in the domain of losses—signifies a diminishing sensitivity to outcomes the further away they are from the reference (p.86) point. As for probabilities, the greater impact of moving from impossibility to low probability and from certainty to high probability, compared with similar changes in intermediate probabilities, signifies a diminishing sensitivity to changes in probability the further away they are from the two boundaries.
More specifically, diminishing sensitivity explains why adding a new feature to a product with relatively inferior existing features increases the demand for the product more than adding the same feature to a product with relatively superior quality.463 It also explains why—contrary to standard economic analysis—a consumer may drive across town to buy a product for $30 rather than $40 (thus saving $10), but would not make a similar effort to buy one for $2,970 instead of $2,990 (thereby saving $20).464 By the same token, offering gifts is a more effective marketing technique than small price reductions: since a gift is valued separately, receiving it is compared with not having it, rather than as a tiny decrease of a large loss.465 Another finding compatible with diminishing sensitivity is the fact that people tend to make a greater effort to achieve a goal the closer they are to accomplishing it. For example, it was found that members of a reward program, who were entitled to a free cup of coffee after purchasing ten cups, increased the frequency of buying coffee the closer they were to earning the free cup.466
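The driving-across-town example follows directly from any sufficiently concave value function. A minimal sketch, using an illustrative v(x) = √x rather than parameters estimated in the literature:

```python
import math

def v(x):
    # Illustrative concave value function; any function with strongly
    # diminishing marginal value makes the same qualitative point.
    return math.sqrt(x)

# Subjective impact of saving $10 on a $40 purchase ...
save_small = v(40) - v(30)
# ... versus saving $20 on a $2,990 purchase.
save_large = v(2990) - v(2970)

print(save_small > save_large)  # True: the smaller saving looms larger
```

Because sensitivity diminishes with distance from the reference point (here, a price of zero), the $10 saved on a cheap item is experienced as a larger change than the $20 saved on an expensive one, even though it is objectively half as much.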
Diminishing sensitivity refers also to spatial distances. It has been invoked to explain why consumers prefer a shopping tour consisting, for example, of three journeys of Home–Store 1–Store 2–Home in which the distances are 40–10–40 miles, over a tour in which the distances are 30–30–30, although the total travel distance is the same.467
Finally, diminishing sensitivity is akin to the notion of psychic numbing.468 While people recognize that every human life is of equal value, the effort they are willing to exert to save human lives (or otherwise help other people) diminishes as the number of endangered victims increases. Thus, for example, people would be willing to make a greater effort to save the lives of nine out of ten endangered people than to save the lives of ten out of ten thousand.
F. Bounded Willpower
Commentators sometimes classify the entire realm of deviations from economic rationality into three categories: bounded rationality (deviations from thin, cognitive rationality), bounded self-interest (deviations from thick, motivational rationality), and bounded willpower (behaving in a manner that people “know to be in conflict with their own long-term interests”).469 While the third category is not nearly as large or discrete as the first two, we nevertheless discuss it separately, because it does not fit in easily with the other categories. This section discusses first procrastination, then myopia and bounded willpower.
1. Procrastination
Unlike an intentional avoidance of a task or a decision, procrastination involves a voluntary delay of the beginning or completion of a task, or of making a decision, despite the procrastinator’s realization that the delay adversely affects his or her interests and may even result in harmful nonperformance or no decision.470 As Amos Tversky and Eldar Shafir have put it: “Many things never get done not because someone has chosen not to do them, but because the person has chosen not to do them now.”471 Procrastination appears to be a very common phenomenon, resulting in poor performance and considerable monetary and other losses to procrastinators, as well as self-resentment.472 Procrastination may also negatively affect others besides the procrastinator, as in the case of delaying contributions to public causes.
People vary in their tendency to procrastinate. Some studies have demonstrated that this tendency is consistent across time and context, which means that it can be thought of as a personality trait. Of the big-five personality dimensions,473 procrastination is most closely (and negatively) correlated with conscientiousness and its constituents. Thus, the tendency to procrastinate is negatively correlated with organization (planning and structuring one’s endeavors) and achievement-motivation, and positively correlated with distractibility (failure to manage distracting cues) and the intention–action gap (the degree to which people do not follow up on their plans).474 It has also been shown that the more overoptimistic people are, the more prone they are to procrastinate when facing an unpleasant task.475
(p.88) Procrastination depends on the characteristics of the task at hand. The further away the task’s expected rewards or punishments, the greater the tendency to procrastinate,476 which arguably reflects people’s hyperbolic discount rate of future costs and benefits, discussed below.477 Similarly, the more boring or unpleasant a task or a decision is, the more likely it is to be postponed.478
Given the prevalence and harmfulness of procrastination, considerable attention has been given to means of overcoming it, including self- and externally-imposed deadlines, and mandated decision-making. As for deadlines, in one study a paid task was completed by 60 percent of the participants who were given a five-day deadline, by 42 percent of those given a three-week deadline, and by only 25 percent of those receiving no deadline.479 Deadlines may, however, induce people to do things that are arguably less desirable, such as appealing exam grades and court judgments.480 A few studies have compared the efficacy of self- versus externally-imposed deadlines, with mixed results: while some found that self-imposed deadlines are more effective at ensuring performance,481 others showed that externally imposed deadlines are more effective.482 Another antidote to procrastination (and the omission bias) is compelling people to make decisions. For example, people who apply for a driver’s license may be required to indicate whether they consent to donate their organs posthumously, and new employees may be required to decide whether to enroll in a pension plan.483
2. Myopia and Bounded Willpower
A large body of experimental and theoretical research in economics and psychology has studied people’s choices involving costs and benefits occurring at different times—particularly the tendency to discount future costs and benefits compared with immediate ones.484 This research on intertemporal preferences has largely emerged in response to normative economic models, and has been advanced by both behavioral economists and (p.89) psychologists. Consequently, much of the literature tends to discuss intertemporal choices in isolation, rather than as one aspect of the broader issues of self-regulation and self-control.485 A detailed discussion of self-control failures—associated, inter alia, with issues of crime and violence—is beyond the scope of the present discussion.486 However, from psychological and legal policy perspectives, the issues of myopia and self-control are hardly distinguishable in contexts such as spending versus saving, consumption of unhealthy food, and smoking. Hence this subsection discusses both intertemporal choices and closely related issues of self-control.
The standard economic model for intertemporal choices, proposed in 1937 by Paul Samuelson, has long assumed that people discount future costs and benefits at a constant discount rate.487 Indeed, in many contexts, discounting of future costs and benefits is perfectly sensible. Receiving a sum of money earlier may enable a person to earn interest on that sum, repay an interest-bearing debt, or otherwise invest the money profitably. However, people discount future outcomes even when this logic does not apply, such as with regard to health conditions and the saving of human lives (e.g., in choosing between saving the lives of ten people tomorrow versus saving the lives of eleven other people a year from now). Moreover, people’s subjective discount rate is often much higher than any available interest rate. Most important, it appears that people’s discount rate is generally not constant but rather hyperbolic—that is, it declines as time increases.488 For example, many people would prefer to receive ten dollars today than twelve dollars in two weeks’ time—yet would rather receive twelve dollars in a year and two weeks’ time than ten dollars a year from now. A hyperbolic discount rate implies that people have time-inconsistent preferences: their relative ranking of two delayed outcomes depends on when they make the choice.
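The ten-versus-twelve-dollar reversal can be reproduced with the simple hyperbolic form V = A / (1 + kD) that is commonly used in this literature. The discount parameter k = 0.02 per day is purely illustrative:

```python
def hyperbolic_value(amount, delay_days, k=0.02):
    """Present value under hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay_days)

# Today: $10 now beats $12 in two weeks ...
now     = hyperbolic_value(10, 0)          # 10.0
soon    = hyperbolic_value(12, 14)         # ~9.38
# ... but push both options a year out and the preference reverses.
later   = hyperbolic_value(10, 365)        # ~1.20
later12 = hyperbolic_value(12, 365 + 14)   # ~1.40
print(now > soon, later < later12)  # True True
```

Under a constant (exponential) discount rate, adding the same delay to both options could never flip the ranking; the hyperbolic curve produces exactly the time-inconsistency described above.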
Intertemporal preferences pertain to a wide range of outcomes and circumstances, and are very often confounded with other factors. For example, the decreasing marginal utility of resources implies that the temporal discount rate of a person who prefers to receive $100,000 immediately over receiving $200,000 in a year’s time is considerably lower than 100 percent, because typically, the expected utility derived from $200,000 is significantly lower than twice the utility from $100,000. By the same token, in real life, the further away outcomes are, the greater their uncertainty, which in turn likely decreases their expected (p.90) value. For example, the utility from a given amount of money in the future depends on one’s financial condition at that time: it would be lower if one somehow became considerably richer in the interim—and higher if one became considerably poorer. Indeed, the future utility might even be nil if one passes away before the designated future time. Even if these and comparable factors could somehow be separated, it is highly unlikely that any individual would have a single (constant or hyperbolic) discount rate for different objects, time spans, and outcome magnitudes. In fact, the notion of a single discount rate is not borne out by the available evidence.489
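The confounding role of decreasing marginal utility can be made concrete with a worked example. Assuming an illustrative concave utility u(x) = √x (the functional form is a simplification, not taken from the cited studies), indifference between $100,000 now and $200,000 in a year implies a utility discount rate of about 41 percent, far below the 100 percent suggested by the raw dollar amounts:

```python
import math

def u(x):
    # Illustrative concave utility; the point holds for any concave u.
    return math.sqrt(x)

# Indifference condition: u(100000) = u(200000) / (1 + rho)
implied_rho = u(200_000) / u(100_000) - 1   # sqrt(2) - 1, about 0.41
naive_rho   = 200_000 / 100_000 - 1          # 1.0 if utility were linear

print(round(implied_rho, 2), naive_rho)  # 0.41 1.0
```

The more concave the utility function, the larger the gap between the apparent (dollar-based) discount rate and the discount rate actually applied to utility.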
In addition to time-inconsistency, studies have revealed that gains are discounted at a higher rate than losses.490 In fact, a substantial proportion of subjects prefer to incur a loss immediately rather than to put it off.491 Intertemporal choices are also vulnerable to framing effects: people are willing to pay considerably less to expedite the receipt of a given good from T2 to T1 than they demand in return for delaying its receipt from T1 to T2.492 Finally, contrary to the logic of discounting future costs and benefits, people prefer improving sequences of good outcomes (such as gradually increasing wages) to declining ones,493 and prefer decreasing sequences of bad outcomes (such as diminishing physical discomfort) to increasing ones.494 Some of these characteristics, including the diminishing sensitivity to temporally remote outcomes, reference-dependence, and the gain-loss asymmetry, are analogous to prospect theory’s value function.495
Myopic behavior also interacts with other biases in the perception and processing of information. For example, when people set saving or dietary goals, they perceive goal-consistent behaviors (such as saving a certain amount of money) as contributing more to attaining the goal than they perceive goal-inconsistent behaviors (such as spending the exact same amount) as obstructing it. The more people expect to meet their goal, the greater this so-called progress bias.496
Excessive discount rates are related to issues of impulsiveness and self-control. The famous marshmallow experiments, conducted by Walter Mischel and his colleagues in the 1960s and 1970s, examined four-year-old children’s ability to forgo immediate gratification (p.91) in order to get a larger, delayed reward, and the strategies they used to achieve this goal.497 Interestingly, follow-up studies have shown that children who displayed greater self-control in those experiments tended to be more cognitively and academically competent and to cope better with frustration and stress in adolescence,498 and to do better in interpersonal relationships as adults.499 Another study found a correlation between high discount rates and (self-reported) earlier age of first sexual activity, recent relationship infidelity, smoking, and higher body mass index.500
The context-dependence of discount rates, their vulnerability to framing effects, the aforementioned progress bias, and the close link between myopia and self-control—all cast doubt on the very notion that people’s myopic behavior can be adequately captured by a utility-discounting function. Accordingly, several alternative accounts of people’s intertemporal choices have been proposed. One such account is analogous to dual-process theories of decision-making. George Loewenstein has pointed out that when people act against their own long-term interests, they are often aware that this is the case, but experience a feeling of being “out of control.”501 Often, such behavior arises from impulse or sudden emotion, such as hunger or craving. The immediate, powerful effect of these visceral factors crowds out other goals. Furthermore, people tend to underestimate their own susceptibility, and the susceptibility of others, to these factors. Experimental support for this account was provided by a study in which subjects who performed a cognitively demanding task, thus using much of their deliberative resources elsewhere, tended to choose a less healthy food.502
Relatedly, according to the construal level theory, the mental representation of chronologically remote outcomes is more abstract than that of chronologically close ones.503 Accordingly, it has been hypothesized that people’s smaller WTP for expediting the receipt of a given good, compared with their WTA for delaying its receipt, has to do with the initial construal of receiving it as abstract (in the former framing) or concrete (in the (p.92) latter). Indeed, it has been found that asking subjects to concretely visualize the moment of receiving the good and using it—thus removing this difference between the two framings—eliminated the WTA-WTP difference.504
It has also been suggested that intertemporal choices may be the product of simple heuristics (rather than settled intertemporal utility functions), taking into account the absolute differences and relative percentage differences of the attributes of the outcomes in the relevant choice set.505 Additional psychological determinants that impinge on intertemporal choices are the degree to which people feel connected to their future selves,506 inattentiveness to the future ramifications of present behavior,507 and people’s perception of future time durations.508
Unlike some erroneous logical inferences, and similarly to phenomena such as loss aversion, the very existence of a high discount rate is not “irrational” per se.509 However, hyperbolic discount rates imply that choices are not time-consistent. They are also associated with impulsiveness, myopia, and deficient self-control. While it is possible to incorporate such phenomena, and the measures people take to overcome them, into economic analysis by stretching the notion of “information costs” or by modeling individuals as consisting of “multiple selves,”510 such extensions are no substitute for empirical study of these phenomena, their personal and social costs, and the possible ways of dealing with them.
Myopia and failures of self-control have particularly large, adverse effects in the contexts of dieting,511 smoking,512 drug addiction,513 saving for retirement,514 and consumer behavior.515 These issues have several common features. First, people’s failure to (p.93) behave in accordance with their long-term interests results in major harms to themselves and significant social ills, such as obesity and insufficient savings for old age. Second, firms often take positive steps to induce myopic behavior for their own benefit—such as promoting the consumption of unhealthy food through aggressive advertising, encouraging smoking, and convincing people to borrow for present consumption instead of saving for old age. Third, people sometimes use precommitment, self-paternalistic devices to curtail their impulsive decision-making. Keeping away from tempting food products and cigarettes, and depositing money in savings accounts with no option for early withdrawal, are two examples. Fourth, the market sometimes offers mechanisms that help people overcome their myopia (such as saving plans), and new proposals for such mechanisms are constantly being put forward and examined.516 Notable examples are changing the default from employees’ nonparticipation to participation in retirement saving plans,517 and employees’ precommitment to increasing the percentage of their salary saved for retirement whenever they get a salary raise.518 Finally, governments throughout the world take measures to deal with the social problems associated with failures of self-control. These measures range from very mild, psychologically inspired nudges—such as requiring producers to inform consumers about products’ risks in a more salient and vivid manner—to compulsory measures, such as outlawing particularly unhealthy food products, and criminalizing the sale of tobacco products and alcohol to minors.519
G. Moral Judgment and Human Motivation
Philosophers debate the question of how important it is for a moral theory to align with prevalent, deeply held moral intuitions. Some hold that a moral theory that does not fit with at least some intuitions is unacceptable, while others give very little weight to this criterion. Many philosophers take intermediate positions (the fact that people sometimes have different intuitions, or that the same individual may have conflicting intuitions about a particular question, further complicates the issue). One need not resolve this question to acknowledge that psychological studies of moral intuitions are interesting in their own right, and are important for policymaking. People’s moral judgments are important whenever one aims to understand, predict, or influence people’s behavior, because moral judgments influence behavior. Contrary to the assumption of rational choice theory, there is much (p.94) evidence that people are not driven exclusively by self-interest, but also by moral norms and prosocial motivations.
Prevailing moral intuitions are also important for the law. If the validity of legal norms depends on their morality (as some theories of law contend), and if the validity of moral theories depends (at least to some extent) on their compatibility with moral intuitions, then moral intuitions are essential to the law. Moreover, even if one denies that the law’s validity hinges on its morality, or that compatibility with moral intuitions is vital to a moral theory, the compatibility of legal norms with prevailing moral intuitions is important for purely instrumental reasons: people are more likely to follow legal norms that they perceive as just and desirable than norms that they perceive as unjust.520 The compatibility of legal norms with prevailing moral intuitions may also be mandated by democratic principles.
In the following subsections we briefly discuss several aspects of people’s moral judgment and motivation.521 We begin with the distinction between consequentialist and deontological morality. While normative economics rests on consequentialist morality, many studies have shown that most people predominantly reason, and conduct themselves, as moderate deontologists. We then turn to more specific aspects of people’s perceptions of justice, with particular focus on notions of substantive and procedural fairness, and the belief in a just world. Next, we discuss another aspect of human behavior that appears to be at odds with the postulates of standard economic analysis—namely, people’s prosocial and altruistic behavior. Finally, we touch upon the relationships between moral judgments and the distinction between intuitive and deliberative judgment and decision-making.
2. Deontology versus Consequentialism
(a) Normative Ethics
Welfare economics is a consequentialist moral theory. It holds that the only factor that ultimately determines the ethicality of acts, omissions, or anything else is their consequences, and mandates that people should always promote the best outcomes. It recognizes no deontological constraints on promoting the good, or options to prioritize other goals.522 In contrast, while deontological moral theories acknowledge the importance of promoting good outcomes, they deny that promoting them is the only morally decisive factor.523 Deontological theories prioritize such values as autonomy, basic liberties, truth-telling, fair play, and promise-keeping over the promotion of good outcomes. They include constraints on attaining the best outcomes. At the same time, deontological morality admits of options: in many circumstances, an agent may legitimately give precedence to her own interests or the interests of her loved ones or members of her community over the (p.95) enhancement of the overall good. Thus, the affluent need not donate most of their money to alleviate the suffering of the underprivileged. They may legitimately spend their money on “luxuries” such as going to the movies and reading fiction, even if giving that money to the poor would enhance net human welfare.524 Deontological theories thus recognize agent-relative constraints (on promoting the good) and agent-relative options (of not promoting the good).525
The central deontological constraint is against harming other people. It usually includes restrictions on violating rights such as the rights to life and bodily integrity, human dignity, and freedom of speech. It also includes special obligations created by promises, and restrictions on lying and betrayal.526 There is additionally a “deontological requirement of fairness, of evenhandedness or equality in one’s treatment of people.”527
The notion of agent-relativity implies that there is a difference between the duty to refrain from violating a constraint and the duty not to bring about, or to prevent, other violations—even where such violations are the expected outcome of avoiding the current one. Otherwise, the prohibition of killing one person to save two others would preclude both killing that person and not killing him (thereby allowing the death of the other two). Deontology therefore must resort to a distinction between actively violating a constraint and not preventing the violation of constraints by others—or some such distinction.528 In the context of the constraint against harming people, deontology thus distinguishes between actively harming a person and not aiding her (often referred to as the doing/allowing distinction).529 While doing harm is at least presumptively immoral, allowing harm is not ordinarily regarded as such. At the very least, the constraint against active harming is much stricter than the duty to come to the aid of others.
Another distinction deontologists often draw is between intending harm and merely foreseeing it. Intending harm is immoral even if the harm is merely allowed, while foreseeing harm is not necessarily immoral.530 The constraint against intending harm forbids not only harming a person as an end, but also as a means to attaining another goal. Thus, killing someone to inherit her money is an intended harm, even if the killer would have preferred (p.96) that there were other ways of obtaining the money. Using a person as a means violates the requirement to respect people as ends.
These distinctions are often discussed in reference to the trolley problem.531 Suppose that an uncontrolled trolley is hurtling down a track. Directly in its path are five people, who cannot escape and will be killed by it unless it is diverted. An agent can flip a switch that would divert the trolley to another track, where it would kill a single individual. Should the agent flip that switch? Alternatively, suppose that the only way the agent can save the five people is by flipping a switch that would cause another individual to fall off a footbridge onto the track, thereby blocking the trolley and killing that individual. Should the agent cause the fall of the other individual? While some deontologists would object to flipping the switch in both cases, others may find diverting the trolley morally permissible, or perhaps even imperative, while deeming the causing of the individual’s fall morally forbidden. Although both killings are active, the latter deontologists ground the difference in the distinction between killing as a mere side effect (in the diversion scenario) and killing as a means (in the footbridge scenario).
Deontological moral theories are either absolutist or moderate.532 While absolutist deontology maintains that constraints must not be violated for any amount of good outcomes, moderate deontology holds that constraints have thresholds: a constraint may be overridden for the sake of furthering good outcomes, or avoiding bad ones if sufficient good or bad is at stake. For example, even the constraint against actively/intentionally killing an innocent person may be justifiably infringed if it is the only way to save the lives of thousands of others.533 The thresholds that have to be met to justify the infringement of other constraints, such as those against lying or breaking one’s promise, are much lower. Correspondingly, deontological options need not be absolute: when enough good or bad outcomes are at stake, there is no longer an option not to further the good or avoid the bad. In determining the amount of good/bad outcomes that may justify infringement of a constraint, a moderate deontologist may reasonably take into account both the doing/allowing and the intending/foreseeing distinctions. Thus, the threshold that has to be met to justify harming someone when the harm is intended is plausibly much higher than when it is a mere side effect.
Moderate deontology not only forbids the infringement of moral constraints unless a sufficiently large net benefit is produced by such an infringement, but also excludes or gives lesser weight to certain costs and benefits when determining whether the net benefit meets a given threshold. For example, deontology may hold that certain values take lexical priority over others (e.g., human lives versus pecuniary losses); that small benefits (such as eliminating headaches) should not be taken into account at all when more serious values (such as human lives) are at stake; that chronologically distant benefits and costs (p.97) should be hugely discounted; and that eliminating bad outcomes takes precedence over promoting good ones.534 Lastly, deontological morality may distinguish between harming (saving) an unidentified person and an identified one. Whereas from a consequentialist viewpoint harming (or saving) an unknown individual is identical to harming (or saving) an identified one, deontology may distinguish between the two, and find the latter more objectionable (or justifiable).535
(b) Behavioral Studies
While ethicists hotly debate which normative theory is correct, they have long recognized that of the three families of theories—consequentialism, absolutist deontology, and moderate deontology—the third is most consistent with commonsense morality, or prevailing moral convictions.536
Both absolutist deontology and simple consequentialism are often counterintuitive. For example, the absolutist judgment that one must never actively or intentionally lie—even if, by doing so, one might save the life of an innocent person—sounds rather strange to most people. At the same time, most people find the consequentialist judgment that it is morally obligatory to kill 100 innocent people as a means to saving the lives of 101 others (or to save the lives of 100 others and prevent a minor injury to one more) equally abhorrent. In fact, both consequentialists and absolutist deontologists go to great lengths to square their respective theories with prevailing moral convictions. For example, absolutist deontologists may draw a fine line between lying and failing to tell the truth, to avoid the untenable results of an absolute prohibition on lying. At the same time, consequentialists may shift from act- to rule-consequentialism—which, in the main, is a way of providing a consequentialist foundation to commonsense morality.537
Numerous experimental studies have indeed demonstrated that most people’s moral judgments are neither consequentialist nor absolutist deontological. One line of research has studied the related notions of protected values and taboo trade-offs, which deal with values that resist trade-offs with other values (especially economic ones).538 Contrary to consequentialism, it has been found that many people initially opine that such values should (p.98) never be violated (for example, that doctors should never remove dying patients’ organs without their consent). However, contrary to absolutist deontology, when asked to think of counterexamples, many of those espousing protected values qualify this statement.539 Moreover, politicians making policy decisions often face unavoidable trade-offs that involve protected values.540 Policymakers who must decide whether to invest in public health programs, highway safety, saving an endangered species, or simply balancing the budget cannot escape the need to measure such goals on a single policy metric. Yet, when doing so they must be careful, since treating a protected value like any other commensurable good is tantamount to “political suicide.”541 Consequently, the public discourse surrounding protected values tends to resort to rhetorical obfuscation.542 By labeling a policy choice as “moral” or “just,” rather than as “efficient” or “cost-justified,” people can overlook the transgression of the protected value.543
It has also been shown that, contrary to consequentialism, people are viewed—and view themselves—as bearing a greater moral responsibility for harmful outcomes that they actively bring about, as opposed to those that they passively allow to happen.544 These studies have also demonstrated that for most people, the prohibition of actively causing death has thresholds, such that infringing actions are permissible if they are the only way to prevent a sufficiently larger number of deaths.545
In recent years, many studies have examined people’s reactions to various versions of the trolley problem and comparable moral dilemmas. For example, most subjects judge harmful actions to be morally worse than harmful omissions, and intended harm as worse than foreseen harm.546 Most subjects believe that harming a person in order to save others (intended harm) is unacceptable, while harming a person as a side effect (p.99) of saving others (foreseen harm) is permissible—although many subjects are unable to provide an adequate explanation for this distinction.547 Subjects also tend to judge harm involving physical contact as morally worse than harm without contact.548 Contrary to the agent-neutrality mandated by simple consequentialism, and in line with deontological agent-relativity, it was found that people judge both intended and foreseen killing in a bid to save oneself and others as more acceptable than killing to save only others.549 In the same spirit, sacrificing a stranger to save several people is deemed more acceptable than sacrificing a relative.550
It has also been demonstrated that, contrary to absolutist deontology, people justify the active killing of one person as a means to saving a vast number of other people.551 A divergence between absolutist deontology and prevailing moral judgments has also been found in experimental designs in which subjects thought that killing the person in the standard footbridge scenario was permissible.552
In summary, while people’s moral judgments vary—some are consequentialist, some are absolutist deontological, and some are moderate deontological (and the judgments of the same person may vary from one context to another)—most moral judgments appear to be more in line with moderate deontology than with either consequentialism or absolutist deontology. People tend to believe that maximizing good outcomes is subject to moral constraints—including the constraint against actively or intentionally harming other people—but that these constraints may be overridden if good or bad outcomes of sufficient magnitude are at stake. It should also be noted that large-scale experiments and surveys have revealed a remarkable uniformity in people’s reactions to various versions of the trolley problem and comparable moral dilemmas.553 An analysis of the moral judgments of thousands of people, from around the world, has shown that while variables such as gender, (p.100) education, political involvement, and religiosity yielded statistically significant effects, these were nevertheless extremely small, and inconsistent.554
It has been argued, based in part on neurological studies, that deontological judgments are more associated with emotions, and consequentialist judgments with deliberative thinking.555 Patients with focal bilateral damage to a brain region involved in the normal generation of emotions produced “an abnormally ‘utilitarian’ pattern” of judgments in trolley-like dilemmas (in other classes of moral dilemmas, the judgments of patients with similar brain damage were normal).556 Correspondingly, people with a propensity for an intuitive mode of decision-making were found to give more weight to deontological constraints than those with a tendency for deliberative thinking.557
However, Shaun Nichols and Ron Mallon have demonstrated that people’s judgments reflected the deontological distinction between intending harm and merely foreseeing it, even in scenarios involving no bodily harm to anyone, thus casting doubt on the claims that deontological constraints are primarily driven by emotions.558 Similarly, coping with moral dilemmas while performing a cognitively demanding task—a manipulation reducing people’s resort to System-2 reasoning—did not affect subjects’ sensitivity to the conflicting moral arguments.559
Furthermore, studies of trolley-type dilemmas have arguably confounded deontological versus consequentialist judgments and intuitive versus counterintuitive judgments. Accordingly, it has been found that counterintuitive moral judgments—be they consequentialist or deontological—were associated with greater difficulty and activated parts of the brain involved in emotional conflicts.560 In general, a growing consensus has emerged in recent years that moral judgments are reached by multiple systems at once—both affective and cognitive—and involve both emotions and principles, intuition and deliberation.561
(p.101) Besides the greater support for moderate deontology than for either consequentialism or absolutist deontology, more specific psychological phenomena are compatible with (moderate) deontology as well. For example, in recent years several experimental studies have established the identifiability effect—namely, people’s tendency to react more generously or more punitively toward identified individuals than toward unidentified ones.562
To be sure, neither the prevalence of deontological moral convictions nor the (moot) argument that consequentialist reasoning is more deliberative proves that either type of moral theory is philosophically superior to the other.563 But even if prevailing moral convictions are wrong, since the compatibility of legal norms with prevailing moral judgments is important for principled and instrumental reasons, policymakers should take these findings into account.564
3. Fairness and Social Justice
Following Aristotle, philosophers and jurists commonly distinguish between two primary forms of justice: corrective and distributive. Corrective justice deals with the duty to remedy wrongful losses that one person inflicts on another in voluntary (e.g., contractual) or involuntary (e.g., tortious) interactions. It is attained by depriving the gainer of her ill-gotten gains and remedying the loser’s losses. Distributive justice deals with the allocation of benefits and burdens among members of society. It requires that each person receives the allocated benefits or burdens in proportion to the pertinent criterion (such as merit, need, or excellence).
Since the 1960s, social psychologists have extensively studied people’s judgments of justice and fairness in various contexts. The social psychology literature does not usually distinguish between corrective and distributive justice—often using the latter term to include the former as well. Social psychologists contrast “distributive justice” with procedural justice—the fairness of the procedures by which allocation decisions are made. Some social psychology studies have also investigated retributive justice—namely, the psychological processes relating to punishing people who violate social, legal, or moral norms. This (p.102) subsection discusses substantive (“distributive”) fairness and procedural fairness, as well as a phenomenon that is relevant to all forms of fairness judgments: the belief in a just world. Retributive justice is alluded to elsewhere in the book.565
(b) Substantive Fairness
The most influential theory in the social-psychological study of substantive fairness has been equity theory. It posits that people perceive that they are treated fairly when the ratio between their received outcomes (for example, their salary) and their input (e.g., the effort, talent, and commitment they put into their work) is equal to the ratio between the received outcomes and the inputs of other people.566 A key element of equity theory is that people are distressed not only when they are treated less favorably than they feel they deserve, but also—albeit to a lesser extent—when they are treated more favorably. When people are treated unfairly and less favorably than others, both fairness and self-interest are violated, which in turn leads to greater resentment.
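Equity theory’s core ratio comparison can be captured in a few lines. The sketch below is purely illustrative: the workers, salaries, and effort units are hypothetical, and the multidimensional inputs and outputs discussed later in this subsection are collapsed into single numbers:

```python
# Sketch of equity theory's comparison: one's outcome/input ratio is judged
# against that of a comparison other. Workers, salaries, and effort units are
# hypothetical; real inputs and outcomes are multidimensional and harder to
# quantify.

def perceived_equity(own_outcome, own_input, other_outcome, other_input):
    """Classify the focal person's situation relative to a comparison other."""
    own_ratio = own_outcome / own_input
    other_ratio = other_outcome / other_input
    if own_ratio == other_ratio:
        return "equitable"
    return "under-rewarded" if own_ratio < other_ratio else "over-rewarded"

# Alice earns 60 for 40 units of effort; Bob earns 90 for 60 units.
# Their outcome/input ratios are equal (1.5), so Alice perceives equity.
assert perceived_equity(60, 40, 90, 60) == "equitable"

# Carol earns 60 for 60 units while Bob earns 90 for 60: Carol's ratio is
# lower, so she feels under-rewarded. Bob, comparing himself with Carol, is
# over-rewarded, which equity theory predicts also causes (milder) distress.
assert perceived_equity(60, 60, 90, 60) == "under-rewarded"
assert perceived_equity(90, 60, 60, 60) == "over-rewarded"
```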
When people feel that they are treated unfairly, they might restore equity in several ways, including by increasing or decreasing their contributions, by changing their assessment of their own input or output or those of others, or by quitting the relationship entirely. Perceived unfairness may also lead to unethical behavior, such as stealing from one’s employer.567 People may object to unfairness not only in their own relationships, but also in relationships between others.
Thus, contrary to rational choice theory, psychological studies reveal that people care about fairness even when it is at odds with, or unrelated to, their self-interest. Indeed, a meta-analysis of dozens of studies has shown that outcome fairness has a stronger effect on variables such as organizational commitment than outcome favorability.568
The claim that fairness serves as a constraint on profit maximization has also been established by experimental game theory. Two pertinent games are Ultimatum and Dictator. Ultimatum is a game in which one person (the proposer) is asked to divide a sum of money between herself and another person. The other person (the responder) may either accept the proposed division (in which case the division is implemented), or reject it (in which case both players receive nothing). Dictator is a game where one party unilaterally decides how to divide a sum of money between herself and another person. Rational choice theory predicts that in an Ultimatum game the proposer will offer the responder the smallest unit (p.103) of money used in the game and the responder will accept this offer; and that in a Dictator game the dictator will appropriate the entire sum. However, numerous experiments have established that in Ultimatum games most proposers offer responders a generous share of the pie (40 percent on average) and that responders reject very low offers.569 These results were obtained even under conditions of complete anonymity, thus indicating that it is not only the fear of retaliation that induces people to behave fairly. Responders’ rejections of clearly disproportionate divisions in the Ultimatum game indicate that people are willing to bear some costs to punish others for what they perceive as an unfair division of resources.570 Even in the Dictator game, while a substantial minority (36 percent) keep all the money for themselves, most people share a substantial fraction of their endowment (28 percent on average) with the passive participant.571
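The gap between the rational-choice prediction and observed behavior can be stated as a toy model. In the sketch below (illustrative only; the responder’s 25 percent minimum acceptable share is an assumed parameter, not an experimental estimate), a proposer best-responds to a responder who rejects offers below a threshold:

```python
# Toy Ultimatum game: the proposer divides PIE dollars; the responder accepts
# any offer at or above her minimum acceptable share and rejects otherwise.
# The thresholds used below (0.0 and 0.25) are illustrative assumptions.

PIE = 10.0

def responder_accepts(offer, min_share):
    """Accept iff the offer meets the responder's minimum acceptable share."""
    return offer >= min_share * PIE

def best_offer(min_share, step=0.01):
    """The proposer's payoff-maximizing offer against a known threshold."""
    best, best_payoff = 0.0, -1.0
    offer = 0.0
    while offer <= PIE:
        payoff = (PIE - offer) if responder_accepts(offer, min_share) else 0.0
        if payoff > best_payoff:
            best, best_payoff = offer, payoff
        offer = round(offer + step, 2)
    return best

# Against a purely self-interested responder, the rational-choice prediction
# holds: the proposer offers (essentially) nothing.
assert best_offer(0.0) == 0.0

# Against a responder willing to bear the cost of rejecting shares below
# 25 percent, the proposer's payoff-maximizing offer jumps to a quarter
# of the pie.
assert best_offer(0.25) == 2.5
```

The experimental finding that responders reject low offers thus makes generous proposals rational even for self-interested proposers; the further finding that generosity persists under anonymity shows that fear of rejection is not the whole story.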
The concern among commercial enterprises about fairness—be it for its own sake or as a means of maintaining a positive reputation—may explain otherwise puzzling market behaviors, such as the failure of firms to immediately raise prices when excess demand is not accompanied by an increase in suppliers’ costs (contrary to standard economic models).572 The fairness constraint may also explain the rarity of very high contingency fee rates in the market for legal services, even when such rates would be mutually beneficial.573
Equity theory has greatly advanced our understanding of exchange relationships, that is, relationships in which people give something and get something in return. However, it neither satisfactorily explains fairness judgments in other contexts (such as the allocation of civil and political rights), nor does it provide a complete explanation of people’s judgments of fairness in exchange relationships. Studies have shown that while equity is the primary determinant of fairness in exchange relationships, in the context of minimizing suffering, and in intimate relationships such as within a family or among close friends, needs are an important factor as well.574 Equal division is preferred over an equitable one in contexts that emphasize cooperation and partnership.575 Interestingly, where both equity (the outputs/inputs ratio) (p.104) and equality (similar outcomes for all) are plausible criteria, men tend to prioritize equity, while women are more inclined toward equality.576
A common denominator of people’s judgment of fairness is the key role played by social comparison. Whether one adopts equity, equality, need, or any other distribution criterion, its implementation requires comparisons with others: what others have received (in the case of equality), the ratio between other people’s contributions and outcomes (in the case of equity), other people’s neediness (in the case of need), and so forth. It follows that people may perceive the same output as more or less fair, depending on which reference group they compare themselves (or others) with.577 Various psychological factors determine which competing reference point prevails.578 People may also draw a comparison between their current output/input ratio and their previous one, which may lead to a different judgment than a comparison with other people. Judgments of fairness thus depend on a variety of factors, including personal traits and heuristics (such as availability) that determine the perceived reference group.
Additional limitations of equity theory stem from the fact that both contributions and outputs are often multidimensional and do not lend themselves to easy quantification and summation. For example, some workers may be more hard-working but less productive or less innovative than others. Similarly, workers’ outputs comprise not only their salaries but also non-monetary benefits, respect, and so forth. Judgments of fairness are particularly challenging when an outcome comprises several elements and people differ in their assessment of the comparative worth of those elements.579 Finally, it has been demonstrated that people may have an independent motivation “to do the right thing.” Hence, when they choose between two prosocial courses of action (e.g., one that minimizes inequality and one that maximizes overall welfare), they are more likely to choose the one that is labeled the moral choice, whatever it is.580
(c) Procedural Fairness
One important challenge to equity theory (and to other theories focusing on the fairness of outcomes) has been posed by studies that demonstrate that people care about the fairness of the processes by which decisions about allocation of benefits and burdens are (p.105) made, sometimes no less than about the outcomes of those decisions.581 Both laboratory experiments and field studies have demonstrated that people are more willing to accept unfavorable outcomes when they are the product of a fair process—particularly if it allowed them to express their concerns. People care about procedural fairness both in resource allocation and in dispute resolution contexts.582 Similarly, they value procedural fairness in their encounters with public authorities, such as the police.583 While this phenomenon was initially explained by people’s desire for power and process control,584 subsequent studies have highlighted the importance of dignity and respect within social groups, and the maintenance of ongoing relationships.585 People care about procedural justice both because they believe that a fair procedure—particularly a fair opportunity to voice their concerns before a decision is made—is more likely to produce a favorable allocation, and because they care about procedural fairness per se.586 Procedural fairness may also serve as a heuristic for the fairness of outcomes, when the latter is difficult to assess.587
People may reasonably disagree as to the fairness of particular procedures. As in the context of substantive fairness, here too, social comparisons—that is, comparing the procedures applied in one’s own case with those applied in others’ cases—play an important role in people’s judgments of fairness.588
While the importance of perceived procedural fairness can hardly be denied, the precise relationships between different aspects of procedural fairness, the relative importance of procedural versus substantive fairness, and the complex, context-dependent interactions between these (and other) aspects of fairness, are the subject of ongoing debates.589 Research in the field of policing, for example, indicates that certain measures are perceived negatively even when conducted with strict adherence to the dictates of procedural justice.590 (p.106) Apparently, politeness coupled with a genuine willingness to listen cannot negate the adverse effects of a highly intrusive police search.
(d) Belief in a Just World
The last phenomenon to be mentioned in the present context—belief in a just world591—operates at a different level than the notions of substantive and procedural fairness discussed above. People have a need to believe that they live in a just world, that they and others deserve their fate. They believe that efforts and good deeds are reciprocated. Such belief encourages people to commit to the pursuit of long-term goals, and helps them cope with their own misfortunes. In these respects, it seems to be personally and socially beneficial. However, the belief in a just world may hinder attempts to advance necessary social changes, since both the privileged and the underprivileged may approve of the status quo.592
People experience distress and threat when they observe, or come to know about, people who suffer undeserved misfortune, and use various means to avoid such distress. Helping or compensating the victim is one possibility;593 dissociation from the victim is another.594 The most studied—and most troubling—device is to blame or derogate the victim. People who are otherwise unable to restore justice tend to devalue and denigrate those who are victims of various crimes, the impoverished, and those who are sick with cancer and other diseases.595
4. Prosocial Behavior and Altruism
(a) Helping Others
Contrary to rational choice theory, people often do not act egoistically, but rather for the benefit of others and for society at large. The term prosocial behavior is used in social psychology to cover a wide range of phenomena, including coming to the aid of people in emergency situations, contributing money to charity, volunteering in communities, voting, and participating in social movements. The term altruism denotes a possible motivation for action, namely the desire to benefit other people. The two notions often overlap, but there can be prosocial behavior that is not altruistically motivated, and altruism does not necessarily translate into action.596
(p.107) While most early studies focused on interpersonal helping, more recent research has been extended to planned and continuous activities by groups of people.597 Furthermore, a comprehensive concept of prosocial behavior includes not only unilateral benefitting, but also reciprocal relationships of cooperation between equally situated individuals or groups—a topic extensively studied by experimental economists.598
A basic question in the study of individual and collective prosocial behavior is what determines whether a person will act in a prosocial manner. With regard to the paradigmatic situation of a bystander who may or may not intervene in an emergency situation, numerous studies have highlighted the importance of situational determinants, with particular emphasis on the presence of other people at the scene. In addition to the bystander effect—the phenomenon that an individual’s likelihood of coming to the aid of another person decreases when other passive bystanders are present, due to a subjective diffusion of responsibility—several other factors have been found to affect this likelihood. Inter alia, people are more likely to intervene when the other person’s need is more vivid, more severe, and less ambiguous; when the other person is a friend rather than a stranger; when the costs of helping are low; and in rural areas (compared with urban locations). Finally, it has been found that a larger number of bystanders may increase, rather than decrease, the likelihood of intervention when intervening by oneself is dangerous and assistance of others reduces that danger.599
Contrary to the established effect of such situational variables, early studies did not find clear correlations between the tendency to intervene in a bystander situation and specific personal traits, such as religiosity, self-esteem, or social responsibility. However, subsequent studies have found that aggregate measures of prosocial orientation, and certain interactions between situational and dispositional variables, do provide good predictors of people’s likelihood of coming to the aid of others.600 Prosocial behavior is positively correlated with the likelihood of experiencing affective and cognitive empathy and feeling responsibility for the welfare of others, as well as with belief in one’s self-efficacy. Of the big-five personality dimensions,601 the inclination to act in a prosocial manner is primarily correlated with agreeableness—namely, the inclination to maintain positive relations with others, and to (p.108) act altruistically and cooperatively.602 Such traits, along with self-transcendence values (e.g., a recognition of the equal worth of all humans), and self-efficacy beliefs, significantly account for prosocial behavior.603 Prosocial orientation is predictive of involvement in sustained prosocial behavior in both ordinary life (such as volunteering) and extreme conditions (such as rescuing Jews in Nazi Europe).604 Prosocial behavior is negatively correlated with experiencing self-oriented discomfort when another person is in extreme distress.605
The inclination to help others is affected by one’s mood. Pleasant moods, whether induced or naturally occurring, increase helpfulness. People are more inclined to help others after successfully completing a task, when thinking happy thoughts, or even when experiencing sunny weather.606 The effect of negative moods on prosocial behavior is considerably more complex. Feelings of guilt generally induce prosocial behavior.607 However, the effect of sadness is inconsistent: while it may increase helping, more often than not it decreases prosocial behavior, or has no effect. A major explanation for the negative effect of sorrow on the inclination to help others is that sorrow leads to preoccupation with oneself and reduced concern for others.608
Not only does feeling good increase the likelihood of doing good; doing good usually results in feeling good as well. For this reason, it has been argued that seemingly altruistic behaviors are actually motivated by the egoistic desire to improve one’s mood and relieve negative feelings,609 or by the desire to reduce the unpleasant empathic arousal generated by witnessing the suffering of other people.610 However, other studies, controlling for subjects’ expectation of improving their mood by helping others or providing alternative ways to attain that goal, have shown that prosocial behavior may also be motivated by empathy and altruism, rather than self-benefit.611 Notwithstanding these findings, studies have shown that people (p.109) engage in volunteer work for a multitude of motives, including a sense of commitment and idealism, a desire to meet new people, and an enhancement of one’s self-esteem.612
Another perspective on prosocial behavior underscores the role of social learning: people observe the behavior of others and emulate it.613 People tend to comply with social norms, such as the norm of reciprocity: the felt obligation to repay past favors by helping those who have helped us, and not helping those who have not.614 As one might expect, however, by following this norm people tend to make self-centered assessments, such that givers focus on the costs they incur, and recipients on the benefit they derive from what they receive.615 Equally unsurprising, salespersons, fundraisers, and contributors to political campaigns, among others, regularly take advantage of the entrenched norm of reciprocation.616
While helping is unidirectional, much prosocial behavior takes the form of bidirectional cooperation within, and even between, groups. In comparison to unilateral helping, cooperation characterizes interdependent relationships between similarly situated people, and often involves repeated interactions.617 Cooperation is necessary to overcome social dilemmas—that is, situations in which selfish behavior is individually rational, but when everybody behaves selfishly, everybody is worse off than if everybody had cooperated.618 The well-known prisoner’s dilemma game is a simple model of such a situation in a two-person scenario. The tragedy of the commons and the public goods problem describe social dilemmas in multi-person scenarios.619 Since social dilemmas are commonly invoked by legal economists as a justification for various legal rules and institutions, behavioral studies of such dilemmas are particularly important for behavioral law and economics.
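The payoff structure that makes the prisoner’s dilemma a dilemma can be made concrete with a short sketch. The payoff values below are a standard textbook stylization assumed for illustration, not figures from the sources cited here.

```python
# A minimal sketch (assumed, standard payoff values) of the two-person
# prisoner's dilemma: defecting is individually rational whatever the other
# player does, yet mutual defection leaves both players worse off than
# mutual cooperation.

# PAYOFFS[(my_move, other_move)] -> my payoff; "C" = cooperate, "D" = defect
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, the other defects ("sucker's payoff")
    ("D", "C"): 5,  # I defect, the other cooperates (temptation)
    ("D", "D"): 1,  # mutual defection
}

def best_response(other_move):
    """The move that maximizes my payoff, given the other player's move."""
    return max(("C", "D"), key=lambda my: PAYOFFS[(my, other_move)])

# Defection strictly dominates: it is the best response to either move...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection yields less for each player than mutual cooperation.
assert PAYOFFS[("D", "D")] < PAYOFFS[("C", "C")]
```

The tragedy of the commons and the public goods problem share this structure, generalized to many players.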
In keeping with rational choice theory, the prisoner’s dilemma, tragedy of the commons, and the problem of public goods assume that all people seek to maximize their own utility. (p.110) However, social psychologists have found that people’s motivations vary. According to a common, basic classification, people’s social-value orientation (SVO) is individualistic, prosocial, or competitive. Individualists seek to maximize their lot regardless of the outcomes for others; prosocials prefer an equal distribution of resources and seek to maximize aggregate resources; and competitors seek to maximize their relative advantage over others.620 A meta-analysis of forty-seven studies using decomposed games found that 50 percent of people were classified as prosocials, 24 percent as individualists, and 13 percent as competitors—the remaining 13 percent displaying no consistent SVO.621 Another meta-analysis of eighty-two studies revealed that overall, prosocials cooperated in social dilemmas more than individualists, and individualists cooperated more than competitors.622
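The three orientations can be stylized as simple utility functions over divisions of resources. The functional forms and the hypothetical allocations below are a common illustrative simplification assumed here, not taken from the cited meta-analyses.

```python
# Illustrative (assumed) utility functions for the three social-value
# orientations (SVO) described in the text, applied to divisions of
# resources expressed as (own outcome, other's outcome).

def individualist(own, other):
    # cares only about own outcome
    return own

def prosocial(own, other):
    # values joint gains and dislikes inequality
    return (own + other) - abs(own - other)

def competitor(own, other):
    # maximizes relative advantage over the other
    return own - other

# Three hypothetical divisions (own, other):
allocations = [(6, 6), (8, 5), (6, 2)]

def preferred(utility):
    """The allocation this orientation ranks highest."""
    return max(allocations, key=lambda a: utility(*a))

# Each orientation picks a different division:
# individualist -> (8, 5); prosocial -> (6, 6); competitor -> (6, 2)
```

The point of the sketch is simply that the same menu of outcomes yields different "rational" choices depending on what enters the chooser’s utility function.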
A huge body of research in experimental game theory has established that, rather than behaving as rational maximizers of their own utility, most people behave as reciprocators: they treat others as others treat them, and are willing to punish free-riders, even at some cost to themselves. This research also provides insight into the motivations underlying reciprocity—including inequality aversion, consideration of other people’s intentions in addition to their actions, and concern for overall social welfare (when it conflicts with inequality aversion). The research also highlights the importance of a threat of punishment in stabilizing cooperation over repeated interactions.623
To fully comprehend people’s cooperation, one must consider the dynamics of in-group and out-group relations, including the formation and effect of social identity. Delving into these issues would, however, exceed the scope of the present discussion.624
H. Cross-Phenomenal Factors
This section examines several issues that cut across various cognitive phenomena: individual differences, the effect of expertise and experience on judgment and decision-making, the possible differences between self-regarding decisions and decisions made on behalf of others, group decision-making, cultural differences, and possible reactions to the adverse effects of suboptimal decision-making. While some of these issues have been discussed sporadically above, this section discusses them from a broader and more methodical perspective.
Judgments and decisions depend on three types of factors: task features, environmental conditions, and personal characteristics. While the first two have been extensively studied from early on, considerably less attention has been given to the third. In this respect, JDM research lags behind other areas of psychological research.625 In recent years, considerable evidence has emerged about individual differences in judgment and decision-making, but there is still much room for systematization and theorization in this field.626
People differ in their judgments and decisions, including in terms of their inclination to use various heuristics, their vulnerability to cognitive biases, and their moral beliefs. This is evident in daily life, and has been manifested in thousands of experimental studies. Such individual differences do not imply that there are no predictable and systematic patterns of human judgment, motivation, and decision-making. As the numerous studies surveyed throughout this book have established, such patterns do exist. The variability between individuals in this respect nevertheless poses a challenge to policymakers, because it means that any single measure may have varying effects on different people: it may be beneficial for many, unnecessary for some, and even harmful for others. We will return to this point in Chapter 4.627 Here we only give a glimpse into studies that have sought to identify correlations between well-known heuristics and biases, on the one hand, and intelligence, thinking dispositions, personality traits, and demographic variables, on the other—as well as correlations among different heuristics and biases.628 These correlations provide some insight into the causes of the various phenomena, and into human reasoning in general.629
Keith Stanovich and Richard West have found a negative correlation between subjects’ scores on cognitive ability tests and their proneness to errors in probability assessments and syllogistic reasoning.630 Less obviously, they found a weak, but statistically significant, correlation between low scores on cognitive ability tests and proneness to the hindsight bias and overconfidence. No such correlation was found, however, with the false-consensus effect.631 Likewise, no correlation was found between cognitive ability and people’s susceptibility to the conjunction (p.112) fallacy, base-rate neglect, the certainty effect, framing effects, the omission bias, the sunk-cost effect, confirmation bias, anchoring, and other known cognitive biases.632
The relative independence of cognitive biases and cognitive ability is likely related to the attribution of many heuristics and biases to intuitive, System 1 thinking. According to a model put forward by Keith Stanovich, System 2 comprises two elements: reflective and algorithmic.633 The reflective mind determines whether System 1 thinking will be suppressed by algorithmic, System 2 thinking. While there is some correlation between the cognitive abilities measured by intelligence tests and people’s algorithmic abilities that are the focus of much JDM research, there is a weaker correlation between one’s cognitive and algorithmic abilities and one’s tendency to engage in deliberative thinking. Numerous studies have demonstrated that measures of intelligence display only moderate to weak correlations with some thinking dispositions (such as actively open-minded thinking and need for cognition), and almost none with others (such as conscientiousness, curiosity, and diligence).634 Hence, high cognitive ability does not necessarily translate into less susceptibility to cognitive biases.
This is not to say that a greater tendency to engage in deliberative thinking necessarily translates into lower susceptibility to cognitive biases. Here, too, the picture is not very clear. Several studies have examined the correlation between people’s scores on the need for cognition scale (NCS)—a common test for the tendency to engage in effortful cognitive endeavors635—and framing effects. While the results are mixed, most studies found no such correlation.636 Similarly, Shane Frederick examined correlations between people’s scores on the cognitive reflection test (CRT)—another measure of the inclination to use an analytic mode of thinking637—and several phenomena in judgment and decision-making.638 He found that people who scored low on the CRT (that is, those inclined to more intuitive thinking) had higher discount rates. No correlation was found between CRT scores and self-perceived tendency to procrastinate.639 Subjects high on cognitive reflection were less risk-averse for gains and more risk-averse for losses, compared with subjects with low CRT scores. Thus, unlike the latter, the former did not display prospect theory’s reflection effect.640 Curiously, CRT scores were more tightly linked with time preferences for (p.113) women than for men, but were more tightly linked with risk preferences for men than for women.641
Notwithstanding the considerable progress that has been made in recent years, it appears that we still lack a comprehensive, satisfactory theory of the relationships between cognitive abilities, thinking dispositions, and susceptibility to cognitive biases. The only thing that can be said with some confidence at this point is that there are no strong correlations between susceptibility to cognitive bias and either cognitive ability or thinking dispositions.
Turning to another line of research, some studies have looked into the relationship between personality traits—especially the big-five personality dimensions642—and decision-making. For example, one study found that high scores on openness to experience—a trait consisting of active imagination, aesthetic sensitivity, attentiveness to inner feelings, preference for variety, and intellectual curiosity—were associated with greater risk-taking in the domain of gains. High scores on neuroticism—characterized by anxiety, fear, moodiness, worry, envy, frustration, jealousy, and loneliness—were associated with less risk-taking in the domain of gains, and more risk-taking in the domain of losses.643 Another study found that two facets of conscientiousness—striving for achievement and dutifulness—were correlated with escalation of commitment, but in opposite directions: whereas subjects who scored highly in achievement striving were more susceptible to this bias, those who scored highly in dutifulness were less so.644
Many studies have examined correlations between decision-making and demographic variables. Thus, for example, one large-scale study found that the myopic discount rate (measured by the subject’s choice between a smaller, hypothetical reward sooner and a larger one later) was weakly but significantly higher for respondents who were younger, less educated, and of lower income (although the causal relationship with the latter two variables is unclear).645 A meta-analysis of 150 studies found that women are generally more risk-averse than men, although the gender difference varies from one context to another.646 It was also found that there are significant differences in the magnitude of the gender gap across age levels—although this, too, varied from one context to another. Generally, the (p.114) evidence shows that women exhibit greater loss aversion than men,647 that older people tend to be more loss-averse than younger ones,648 and that higher education reduces (but does not eliminate) loss aversion.649 While some decision-making skills (such as applying decision rules) were found to diminish with old age—arguably due to decline in cognitive ability—others (such as consistency in risk perception) did not, and still others (such as resistance to overconfidence) were found to improve, arguably thanks to greater experience.650
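The notion of a discount rate implied by a smaller-sooner versus larger-later choice can be illustrated with a short calculation. The dollar amounts below are hypothetical, chosen for round numbers rather than taken from the cited study.

```python
# Hedged illustration (amounts assumed): how a "smaller-sooner vs.
# larger-later" choice pins down a respondent's implied annual discount
# rate. A respondent indifferent between the two rewards discounts at the
# rate r solving: sooner = later / (1 + r) ** years.

def implied_annual_discount_rate(sooner, later, years):
    """Annual discount rate at which the two rewards are equally attractive."""
    return (later / sooner) ** (1 / years) - 1

# E.g., indifference between $100 now and $120 in one year implies an annual
# discount rate of about 20 percent; a respondent who still prefers the $100
# now reveals a discount rate above that.
r = implied_annual_discount_rate(100, 120, 1)  # about 0.20
```

A "myopic" respondent, in the study’s terms, is simply one whose choices imply a higher r than average.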
Finally, another line of research has examined the correlations between different aspects of decision-making—particularly in relation to tasks where there is a normatively accurate (or consistent) decision. In general, these studies found statistically significant, positive correlations between subjects’ resistance to different biases, although these correlations were mostly weak.651 Some correlation has also been found between subjects’ aggregate score in batteries of decision tasks and their socioeconomic status—although, once again, correlation does not imply causality.652
Expertise is the possession of domain-specific knowledge that is acquired through experience or training and that leads to sustainable superior performance in domain-related tasks.653 Experts not only possess more information than laypersons; they also organize information into higher-level schemas that allow them to quickly perceive and recall domain-relevant information, recognize situations, and rapidly and accurately respond to them—without considering all of the available data or all conceivable options.654 As such, expertise effectively converts System 2 thinking (which may be crucial at the initial stages of acquiring the expertise) into heuristic-based, System 1 thinking.
While the findings are somewhat ambiguous, it appears that judgments can reflect true expertise if they are reached within a decision-making environment that: (1) is regular and predictable, and (2) offers people an opportunity to learn the relevant patterns.655 If the (p.115) outcomes of decisions are uncontrollable or unpredictable, the very notion of expert decision-making does not apply. But even if the first condition is met, often the second is not, since people do not get clear feedback as to whether they have made the right decision. One reason for the absence of feedback is that the outcomes of the forgone course of action may never become known, thus precluding meaningful comparison. For example, a manager may know how well the employees she has hired are doing, but she will never know how well those she has not hired would have done. Such asymmetric feedback likely distorts learning: a manager who rejects prospective recruits because they do not meet an ill-conceived criterion may never find out that the criterion is flawed. Another reason for insufficient feedback is that the outcomes of one’s decision may be multidimensional, and hence not conducive to a clear assessment. Finally, in complex environments, the outcomes of a specific decision may only become known long after it is made, and the causal link between a decision and a certain outcome may be difficult to identify.
The common tendency to appeal to experts in all spheres of life reflects their obvious advantages. Experts produce more and better products and services, at a lower cost. For decision-making purposes, the use of experience- and training-based heuristics by experts is as crucial as the use of heuristics by laypersons in daily life. Moreover, experts often use strategies that replace or complement intuitive or “holistic” judgments with structured decision processes that employ linear models, multi-attribute utility analysis, and computer-based decision support systems.656 Such processes and systems can dramatically improve experts’ accuracy and consistency.
This is not to say that experts are immune to cognitive errors. Experts are particularly prone to two types of biases: schematic thinking and overconfidence. While the use of schemas is a hallmark of expertise, it may also lead to inattention to relevant information and to false recall of schema-relevant information.657 More importantly, the benefits of fast-and-frugal expert heuristics often come at the price of reduced flexibility, adaptability, and creativity.658 Such rigidity is particularly costly when tasks involve atypical characteristics or when circumstances change.
With regard to overconfidence, studies have shown that professionals are typically overly optimistic about the correctness of their judgments and decisions.659 One adverse effect of such overconfidence is the underuse of decision aids that might improve decision-making.660 That said, the self-assessed correctness of some professionals, such as weather forecasters, was found to be well calibrated, plausibly thanks to the constant feedback they receive.661
(p.116) Beyond experts’ “occupational hazards” of schematic thinking and overconfidence, numerous behavioral studies have examined whether, and to what extent, expertise affects people’s susceptibility to various other cognitive biases. While the reported results are mixed, it is fair to say that experts are not generally immune to biases. For example, one of Tversky and Kahneman’s demonstrations of the law of small numbers was the erroneous answers given by participants at a meeting of the Mathematical Psychology Group of the American Psychological Association—people who are presumably experts in probability estimates.662 Similarly, the moral judgments of trained philosophers were as susceptible to order effects as those of laypersons.663 However, another study showed that the order effect was mitigated when tax professionals had control over the order in which they reviewed the evidence within their sphere of expertise (but not outside it).664
Physicians have been found to be just as susceptible as laypeople to framing effects when choosing between alternative therapies—displaying risk aversion when outcomes were framed as gains, and risk-seeking when the same outcomes were framed as losses.665 In the same vein, Chicago Board of Trade traders were far more likely to take risks in the afternoon after morning losses than after morning gains.666
One empirical study found that professional investors exhibit a disposition effect—the tendency to sell stocks and other assets that have appreciated in value sooner than those whose prices have declined—a phenomenon commonly associated with loss aversion and the anchoring effect.667 In contrast, a subsequent empirical study of the same phenomenon found that sophistication and trading experience eliminate the reluctance to realize losses—but do not entirely eliminate the propensity to realize gains.668 Other empirical studies of the behavior of professional investors found diminished—albeit not eliminated—aversion to losses.669 Still other studies have demonstrated loss aversion and closely related phenomena among economics professors and lawyers.670
(p.117) Finally, the greater inclination to behave unethically to avoid losses than to obtain gains has been found in the behavior of both laypeople and professionals.671 Apropos of unethical behavior, in one experiment employees of a large bank behaved less honestly when their professional identity as bank employees was rendered salient—thus pointing to a causal connection between the two.672
In summary, it would be absurd to deny the superiority of experts over laypersons in making decisions in any number of spheres—from aircraft engineering to language editing. At the same time, experts are human beings, and as such are not immune to cognitive biases. Depending on the particular bias, context, and decision environment, experts sometimes overcome common biases, but on other occasions are equally, or even more, susceptible to them.
3. Deciding for Others
People often make decisions for others, or advise others how to decide. Parents decide for their children, policymakers set rules for the entire population, lawyers represent clients, and physicians answer patients’ questions such as: “What would you do if you were me?” As the latter two examples demonstrate, the self/other distinction sometimes coincides with the distinction between lay and professional decision-making—however, since professionals also decide for themselves and laypersons decide for others, the two situations merit separate discussion.673
Deciding for others raises two basic issues: motivational and cognitive. According to rational choice theory, people ultimately care only about their own interests. Hence, whenever a person advises others or decides on their behalf, there is an agency problem—that is, a concern that the agent would advance his or her own interests rather than those of the principal. The other issue is whether, and how, the heuristics and biases characterizing personal decisions affect decisions regarding others as well. The former issue is discussed elsewhere in the book;674 here we focus on the latter.
While research on this topic is still comparatively young, there is some evidence that making decisions on behalf of others can mitigate cognitive biases. Thus, in one study, participants were asked to imagine themselves as a patient who must decide whether to undergo a certain medical treatment, as a parent faced with the same decision with regard to his or her child, or as a physician making recommendations to a patient (or establishing a general policy) about the treatment. The treatment is expected to eliminate a 10 percent risk of dying from a certain fatal disease, but carries a 5 percent risk of death from its own side effects. (p.118) It was found that changing the participants’ perspective changed their decision. Significantly more respondents opted for the treatment when making the decision for (or recommending the treatment to) other people, compared with deciding for themselves—thus overcoming the well-known omission bias.675 Likewise, it was found that when deciding for others, people overcome the status quo bias.676
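A back-of-the-envelope calculation, using the risk figures quoted above, shows why the treatment is the outcome-maximizing choice, and hence why declining it for oneself reflects the omission bias rather than expected-outcome reasoning.

```python
# Back-of-the-envelope check using the risk figures quoted in the text.
p_disease_death = 0.10      # risk of dying from the disease, eliminated by treatment
p_side_effect_death = 0.05  # risk of dying from the treatment's side effects

survival_without_treatment = 1 - p_disease_death      # about 0.90
survival_with_treatment = 1 - p_side_effect_death     # about 0.95

# The treatment halves the overall risk of death (raising survival by five
# percentage points), so an outcome-based decision-maker should choose it.
assert survival_with_treatment > survival_without_treatment
```

The omission bias consists precisely in weighting the 5 percent risk caused by acting more heavily than the 10 percent risk left in place by not acting.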
In the same vein, although the findings are not unequivocal, it appears that when making decisions for others or advising others how to decide, people exhibit significantly less loss aversion than when they decide for themselves.677 Similarly, a series of survey and real-money exchange experiments have shown that when advisors evaluate entitlements on behalf of third parties, the WTA-WTP disparity is far smaller than when they act on their own behalf.678
In one study, physicians were asked to choose between two treatments for a fatal illness: one with higher prospects of survival but with a risk of unpleasant side effects (such as colostomy and chronic diarrhea), and the other with a lower survival rate but no risk of such effects. The physicians were significantly more inclined to choose the treatment with a lower survival rate for themselves than for others.679 Possibly, this is because they are better able to imagine their patients successfully adapting to a significant disability, and can imagine the attendant suffering only with regard to themselves.680 At any rate, this study demonstrates that it is not always clear which of the two decisions—the self- or other-regarding—is normatively superior.
It is sometimes difficult to determine whether self/other differences are due to variations in decision-making processes. For example, the fact that physicians recommend routine medical examinations to their patients, but fail to undergo them themselves,681 may be due to the physicians’ procrastination and failure of self-control, rather than to self/other differences in decision-making. Similarly, the finding that psychiatrists prefer less invasive and less effective treatment for themselves than for their patients may be justified on the grounds that psychiatrists view themselves as more capable of taking action if the conservative treatment has been found to fail.682
True differences in self-versus-other decision-making may have various explanations. One hypothesis is that cognitive biases are often a product of System 1 thinking; System 1 thinking is considerably more influenced by emotions than System 2; and people are presumably more dispassionate when they make decisions for others than for themselves.683 However, some experimental results appear to contradict this claim.684 Relatedly, there is support for the notion that when people decide for others, they pay more attention to abstract considerations (such as an object’s desirability), whereas when deciding for themselves, they focus more on concrete considerations (such as the difficulties involved in attaining the object).685 According to construal level theory, this difference lies in the fact that there is greater psychological distance when deciding for others, and the greater the psychological distance, the more abstract people’s thinking tends to be.686
It has also been argued that, in self-regarding choices, people tend to give relatively equal weights to different considerations, whereas in other-regarding decisions they tend to focus on the most important consideration(s) and attribute lesser weight to secondary ones.687 One may, however, doubt the generality of this explanation, as many decisions are not multidimensional. As the study of self/other differences in decision-making proliferates, other accounts and additional mediating factors are constantly being offered.688
In addition to the self/other distinction, there may be a difference between deciding for others and advising others how to decide. It was found that when people advise others, they tend to conduct a more balanced information search than when they decide for others.689 A follow-up experiment revealed that the confirmation bias that characterizes self- and other-regarding decisions (but less so advice-giving) is eliminated in decisions concerning others when the decision-maker is not expected to communicate with the other person. Thus, the need to justify a decision (to oneself or to another person) appears to trigger the confirmation bias, and in the absence of this need—either because one is not making the decision but merely giving advice, or because one is not expected to communicate with the other person—the bias is reduced.690
4. Group Decision-Making and Advice-Taking
Many decisions are made by groups rather than by individuals. Examples include elections, decisions by boards of directors, and jury verdicts. Group decision-making varies immensely in terms of group characteristics (size, composition, internal hierarchy, relative expertise of members, etc.),691 decision procedures (e.g., the extent of information sharing and discussion prior to decision and the required majority),692 and the object of decision (e.g., whether the decision directly affects members’ interests, and if so, whether it refers to common goods or to private ones—and, if the latter, whether the group decision applies to all goods).693 Group decision-making is of great interest to various disciplines, including political science, public administration, social psychology, economic analysis, and law. This subsection focuses on decisions by small groups of people who interact with one another, from a JDM perspective.694 It also briefly discusses an intermediate phenomenon of advice-taking, which lies between individual and group decision-making.
Group decision-making may be warranted for non-instrumental reasons, such as fairness and democratic values. It may also be preferred when groups are expected to outperform individuals. It has been demonstrated that on some task dimensions, such as information retrieval, groups perform better than an average individual; in some tasks (e.g., discovering a pattern by induction) they perform as well as the best of an equivalent number of individuals; and in other tasks, such as letters-to-numbers problems (i.e., problems in which digits are coded as letters, and solvers are asked to identify the digit coded by each letter), they perform better than the best individual.695 However, along with such advantages, the characteristics of group information sharing, deliberation, and decision-making may also lead groups astray and result in suboptimal outcomes.696
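To make the letters-to-numbers task concrete, here is a toy version of our own devising (not a puzzle from the cited studies), scaled down to five digits so that the space of possible codes can be searched exhaustively:

```python
import itertools

# Toy letters-to-numbers puzzle: each digit 0-4 is coded by a distinct
# letter, and the solver must infer the mapping from coded sum equations.
# The specific equations below are illustrative.

letters = "ABCDE"

def decode(word, mapping):
    """Read a coded number digit by digit, e.g. 'BC' -> 12 under B=1, C=2."""
    value = 0
    for ch in word:
        value = value * 10 + mapping[ch]
    return value

# Coded constraints seen so far: "B + B = C" and "C + C = E".
constraints = [("B", "B", "C"), ("C", "C", "E")]

def consistent(mapping):
    return all(decode(x, mapping) + decode(y, mapping) == decode(z, mapping)
               for x, y, z in constraints)

# Enumerate every assignment of digits to letters and keep the consistent ones.
solutions = [dict(zip(letters, perm))
             for perm in itertools.permutations(range(5))
             if consistent(dict(zip(letters, perm)))]
```

Every consistent mapping pins down B=1, C=2, and E=4, while A and D remain ambiguous; solvers must keep proposing new equations until the code is fully identified, which is the kind of cumulative inference on which groups have been found to outperform their best member.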
Indeed, while many studies have examined the effect of group decision-making, no consistent conclusions have emerged. In some instances, the transition from individual to group decision-making mitigates divergences from expected utility theory; on other occasions it has no effect, and in still others it increases them.697 The disparate effects of group deliberation are not surprising given the great diversity of groups, their decision procedures, and their goals. We shall focus on the impact of group decision-making on deviations from thin, cognitive rationality. One should mention, however, that group deliberation has been found to affect motivation as well. Specifically, studies of mixed-motives games, such as the prisoner’s dilemma, reveal that, compared with individuals, groups tend to behave less cooperatively vis-à-vis other groups and individuals.698
One factor affecting the success of group decision-making is the nature of the task at hand—in particular, its position on the spectrum between intellective and judgmental tasks.699 According to this typology, in intellective tasks there is a demonstrably correct answer within the relevant conceptual framework. In contrast, in judgmental tasks there is no generally accepted, demonstrably correct answer, as they involve contestable normative, aesthetic, or comparable aspects. For example, solving a mathematical or logical problem is typically very close to the intellective end of the spectrum, while choosing the best candidate for a job or determining the punitive damages in a lawsuit reside in the judgmental region. When the correct answer is easily demonstrable, and the group member or members who are able to find the answer have sufficient incentives to correct other members’ errors, groups are likely to come up with the correct answer. They are likely to do as well as the most competent member in the group. However, for the reasons discussed below, when it comes to judgmental tasks, group deliberation may actually exacerbate individual biases.700
A key reason for preferring group decision-making to individual decision-making is the possibility of tapping into a broader expertise and integrating more information. One challenge facing any group is therefore to make optimal use of its members’ knowledge, and to assimilate as much of the available information as sensibly possible. However, a robust finding of many studies is the so-called common knowledge effect:701 information that is initially shared by all group members is much more likely to be brought up in deliberation and affect the final decision than unshared information—which compromises the quality of decisions. The extent to which this unfortunate result ensues depends on several factors. One is the way group members perceive the process. There is evidence that groups whose members perceive the process as a negotiation between conflicting views in which each member seeks to prevail, rather than as a concerted attempt to reach an optimal decision, are likely to share less information and process it less thoroughly. Another factor is the discussion process itself: by its nature, shared information is more likely to be brought up, and repeatedly so, and is therefore likely to have a greater impact. Finally, the fact that certain information is shared by several or all members makes it sound more valid, and members are more likely to rely on it because it tends to evoke reassuring reactions from other members. Setting a common goal of reaching optimal outcomes (rather than an adversarial, negotiation-like process), and encouraging members to share information and to reserve judgment until all information is shared, may thus improve group decision-making.702
Another well-studied phenomenon is group polarization, which characterizes group judgmental tasks. This occurs when an initial tendency of individual group members in one direction is enhanced following group discussion. The two primary explanations for this phenomenon are social comparison and informational influences. According to the former, people strive to perceive themselves, and to be perceived by others, in a favorable light. Thus, when observing a general tendency in the group, they tend to adopt a position in the same direction, only more extreme. According to the latter explanation, when group members are initially inclined in one direction, the number and persuasiveness of arguments articulated in that direction during deliberation are greater than in the opposite direction, thus strengthening the initial tendency.703
Turning to specific phenomena in judgment and decision-making, unlike the bat-and-ball and similar questions, where the common error is easily demonstrable,704 many cognitive phenomena, such as loss aversion, cannot be characterized as erroneous or irrational per se. Hence they are not expected to disappear following group deliberation. Interestingly, even the conjunction fallacy and base-rate neglect do not necessarily disappear (and at times are even exacerbated) when moving from individual to group decision-making—which indicates that they are not easily demonstrable errors.705
Some experimental studies of the effect of loss aversion and related phenomena have dealt specifically with group decision-making. These studies demonstrate that the tendencies to be risk-averse in the realm of gains, and risk-seeking in the realm of losses, do not disappear, but rather increase, when decisions are made by groups.706 When various group members frame the choice problem differently, the choice usually follows that of the majority.707 Group polarization has been found in experiments studying the effect of team deliberation on individuals’ endowment effect and status quo bias, as well. For instance, the gap between subjects’ willingness to sell a legal entitlement and their willingness to buy it—reflecting an endowment effect and a status quo bias—has been found to widen after deliberation in small groups of two to four members.708 While it is possible for group deliberation to reduce loss aversion, the available data indicates that it may actually magnify it. Support for this hypothesis was also found in a study of escalation of commitment in individual and group decision-making.709
Finally, studies of overoptimism in predicting the completion time of projects—the so-called planning fallacy—have shown that this bias is also exacerbated following group consultation.710 This is because group discussion increases members’ focus on factors that lead to positive forecasts.
Having discussed individual decision-making (throughout this chapter) and group decision-making (in this subsection), it should be noted that a great many decision processes do not fit neatly into either of these two models. Specifically, many decisions are made by a single person after consulting with others.711 People seek advice to improve their decisions and to share accountability, especially in organizational settings.712 In fact, most studies show that using advice does improve decisions.713
However, the central finding of JDM studies with regard to advice-taking is egocentric advice discounting: decision-makers systematically place greater weight on their own opinion relative to that of their advisors, and consequently make less accurate decisions than they would have made had they followed the received advice more closely.714 As might be expected, the more knowledgeable decision-makers are, and the greater the discrepancy between their own judgment and the advice they receive, the greater their tendency to discount the advice. While some advice discounting may be due to the fact that decision-makers have better access to their own reasons than to those of their advisors,715 or caused by insufficient adjustment of the decision-makers’ initial estimation,716 the main cause of advice discounting appears to be egocentrism.717
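Advice discounting is commonly operationalized in this literature as the "weight of advice": the fraction of the distance between the decision-maker's initial estimate and the advice that the final estimate covers, where 0.5 corresponds to equal weighting and values well below 0.5 indicate egocentric discounting. A minimal sketch with hypothetical numbers:

```python
def weight_of_advice(initial, advice, final):
    """Fraction of the distance toward the advice that the decision-maker
    moved: 0 means the advice was ignored, 1 means it was fully adopted,
    and 0.5 means the two opinions were weighted equally."""
    if advice == initial:
        raise ValueError("advice coincides with the initial estimate")
    return (final - initial) / (advice - initial)

# A decision-maker estimates 100, an advisor says 160, and the revised
# estimate is 115: the advice received only half the weight that an
# equal-weighting rule would give it, and then some.
woa = weight_of_advice(initial=100, advice=160, final=115)  # 0.25
```

The empirical regularity is that observed values cluster well below the equal-weighting benchmark, even when advisors are at least as well informed as the decision-makers themselves.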
Possibly due to the sunk-costs effect, decision-makers tend to give more weight to advice that they have paid for than to that which they have received for free.718 Decision-makers also tend to use the confidence heuristic—assuming (often unwittingly) that the more confident the advisor is, the greater his or her expertise or accuracy.719 Clearly, then, advice-taking is no panacea.
5. Cultural Differences
While JDM scholars in the past tended to assume that the phenomena they study are universal,720 recent years have seen a growing recognition of cross-cultural differences in judgment and decision-making.721 Nevertheless, this field of study is still underdeveloped, and there is much more to be learned about the interplay between culture and decision-making. Understanding cultural differences is especially important for policymakers, as policies based on unwarranted generalizations may prove counterproductive.
Cross-cultural studies—many of which focus on East-Asian versus Western societies—have found significant differences in terms of people’s collectivist versus individualist orientation,722 interdependent versus independent self-construals,723 and holistic versus analytical cognitive styles.724 These differences sometimes affect people’s judgment and decision-making.
Thus, while both Americans and Chinese believe that Chinese are more risk-averse than Americans, experiments have shown that in the sphere of financial investments the opposite is true.725 At the same time, Chinese are more risk-averse in the medical and academic spheres. It appears that Chinese and Americans do not differ in their inherent risk attitude, but rather in the perceived riskiness of decisions in each domain. In collectivist societies, such as China, people expect to receive financial support from their extended family when they are in need. Accordingly, they are less risk-averse in the financial sphere—but not in others.
Cross-cultural differences have also been found with regard to intertemporal preferences—in particular, the tendency to excessively discount future gains and losses.726 Besides macro-level differences in saving rates between different nations, micro-level differences were found between people from different societies and different ethnic origins. Tellingly, subjects’ intertemporal preferences can be manipulated by priming techniques.727 In one study, Singaporean students were exposed to either Western or Singaporean cultural symbols, and then asked how much they would be willing to pay for an expedited, one-day delivery of a book, instead of the standard five-day delivery. As hypothesized, exposure to Western symbols resulted in greater impatience.728 In another study, the ethnic identity of American students was primed by getting them to fill out a background questionnaire about the languages spoken at home and the number of generations their families have lived in the United States. The participants were then asked to choose whether they would prefer to receive a certain amount of money earlier, or a larger amount at a later date. It was found that, after the priming of their ethnic identity, participants of Asian descent were more likely to opt for the larger amounts at a later date.729
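The "excessive discounting" at issue in such smaller-sooner versus larger-later choices is often modeled with quasi-hyperbolic ("beta-delta") discounting. The following sketch is illustrative only; the amounts, beta, and delta values are our own assumptions, not parameters from the cited studies:

```python
# Smaller-sooner vs. larger-later choice under quasi-hyperbolic discounting.
# beta < 1 captures present bias: every delayed reward, however near, is
# penalized; delta is the ordinary per-day discount factor.

def discounted(amount, delay_days, beta=0.7, delta=0.999):
    """Present value of a reward received after delay_days days."""
    if delay_days == 0:
        return amount  # immediate rewards escape the present-bias penalty
    return beta * (delta ** delay_days) * amount

sooner = discounted(100, 0)    # $100 today
later = discounted(140, 30)    # $140 in a month

# With beta = 0.7, the delayed $140 is worth less than $100 today, so a
# present-biased chooser takes the smaller-sooner option; with beta = 1
# (no present bias), the larger-later option wins.
```

Cross-cultural and priming effects of the kind described above can be thought of as shifting such parameters: greater patience corresponds to a beta closer to 1, making the larger-later option more attractive.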
Unsurprisingly, cultural differences have been found with regard to phenomena associated with egocentrism—including overoptimism, overconfidence, and the tendency to attribute other people’s behavior to their personal attitudes, rather than to environmental influences (the fundamental attribution error).730 For example, in one study, Canadian participants showed significantly greater unrealistic optimism than their Japanese counterparts.731 In fact, a meta-analysis of ninety-one studies has revealed that within cultures, Westerners showed a clear tendency to think positively about themselves, while East Asians did not, with Asian Americans falling in-between.732 Relatedly, it was found that the endowment effect among East Asians was less pronounced than among Westerners.733 It was demonstrated experimentally that this difference may be influenced by the degree to which independence and self-enhancement (versus interdependence and self-criticism) are culturally valued. Cultural differences were apparent when self-object associations were made salient, but disappeared when such associations were minimized.734
Interestingly, several studies have shown that Chinese, Malaysian, and Indonesian participants—but not Japanese or Singaporeans—exhibited greater overconfidence than Westerners.735 It turned out that this result was mediated by the subjects’ ability to generate conflicting arguments regarding their answers—which suggests that it may have to do with the degree to which their respective education systems nurture critical thinking (which tends to reduce overconfidence).736
Finally, cross-cultural studies, including studies using priming techniques, have demonstrated that Asians are considerably less prone to make the fundamental attribution error: they are considerably less likely to attribute other people’s behavior to their personal dispositions as opposed to situational factors.737
6. Debiasing
(a) Preliminary Comments
This subsection discusses techniques for improving judgment and decision-making in response to cognitive biases. To be sure, cognitive biases are not the only source of suboptimal decisions. For example, illiteracy and innumeracy are two major causes of poor decision-making, but they should be handled primarily by education, not by debiasing. Similarly, many suboptimal decisions are the product of misinformation. Inasmuch as providing people with accurate information overcomes information problems, there may be no need to change their JDM processes (although in practice, the borderline between cognitive biases and information problems is often blurred, and behavioral insights may contribute greatly to the design of disclosure duties).738
Debiasing should also be differentiated from insulation.739 Rather than trying to alter the cognitive processes leading to self-injurious or socially undesirable behaviors, some forms of conduct are prohibited altogether, and some decisions are deemed ineffective. For example, the duty to wear seat belts while driving and the imposition of speed limits basically replace drivers’ decision-making with mandatory rules backed by legal sanctions. Such rules may play an educational role and alter people’s preferences and judgments, but in the main they change behavior not by improving people’s reasoning, but by increasing the costs associated with harmful conduct.740
Furthermore, debiasing stricto sensu should be distinguished from measures whose primary goal is not to change people’s judgments and decisions, but to replace them with those of other people. To the extent that professionals are better decision-makers than laypeople, and that when people make decisions for others they do a better job than when they do so for themselves, and that groups outperform individuals, entrusting decisions to professionals, agents, or groups may overcome cognitive biases. Of course, inasmuch as professionals, agents, and groups fall prey to their own biases, alternative or additional measures may be called for.741
Having delineated the concept of debiasing, it is important to note that debiasing presupposes that something is wrong with the way people would otherwise reason. Indeed, phenomena such as the inverse fallacy and the gamblers’ fallacy lead to demonstrably erroneous decisions. However, other deviations from economic rationality or from consequentialist morality, such as loss aversion and deontological morality, are arguably very sensible. Other things being equal, the more a judgment or a decision is patently wrong, the stronger the case for trying to correct it. When it comes to sensible judgments and decisions, not only is there no reason to debias them, but attempts to change them are also generally doomed to fail, since people often adhere to them even after careful deliberation. This means that debiasing—especially when initiated by the government—inevitably involves normative questions.742
Before we examine specific debiasing techniques, one last comment is in order. In principle, people can adopt their own debiasing strategies. However, debiasing often requires an external intervention. This is because, due to self-serving biases and blind spots, people are commonly unaware of their cognitive biases.743 External interventions may be initiated by friends and family, implemented in organizations, or carried out by governmental authorities. Such interventions raise a host of normative and pragmatic concerns, which exceed the scope of the present discussion.744 Here we focus on behavioral studies of debiasing; the normative and policy issues of debiasing by the law are discussed in Chapter 4.
Debiasing strategies may generally be classified as technological, motivational, or cognitive.745 Each of these three categories is discussed below.
(b) Technological Strategies
A straightforward “means of debiasing judgment is to take [judgment] out of the equation altogether, or rather, to replace it with an equation.”746 A famous example is the transformation of the market for professional baseball players in the United States following the adoption of evidence-based, rigorous statistical methods instead of the traditional methods, based on scouting, experience, and expert judgment.747 In general, technological debiasing strategies replace or complement intuitive or “holistic” judgments by structured decision processes involving linear models, multi-attribute utility analysis, computer-based decision-support systems, and the like.748
Thus, using statistical analysis software instead of relying on one’s intuitive assessments of probability would most likely improve one’s performance. Similarly, a checklist is a very simple—and often very effective—tool for overcoming forgetfulness, especially when decision tasks are complex and decision-makers may be tired or stressed. Using checklists ensures that all considerations are taken into account, and all relevant tests or actions are carried out.749
Linear models are a more sophisticated tool. Based on statistical analysis of the correlations between various attributes (e.g., publication record and teaching experience) and their values (e.g., number of publications), linear models provide a combined score for each alternative (e.g., each candidate for a given academic position), based on the weighted value of each attribute. It has long been demonstrated that decision-making based on empirically established relationships between data and a given dependent variable (in our example, success as an academic) is superior to discretionary, holistic decision-making, which may reflect any number of cognitive biases.750 More intriguingly, even linear models in which the weights of the attribute values cannot be based on reliable empirical data (and are therefore based on experts’ intuition, or even set to be equal) are superior to holistic judgments. At the very least, such models guarantee that all attributes are considered, and that the resulting conclusions are consistent.751
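The mechanics of such an "improper" linear model are simple enough to sketch. In the example below, the candidates, attribute values, and equal weights are all hypothetical; the point is only that each attribute is standardized and then combined into a single score, so that every candidate is judged on every attribute in a consistent way:

```python
# Sketch of an improper linear model for scoring job candidates.
# All names, numbers, and weights are illustrative assumptions.

def zscores(values):
    """Standardize a list of attribute values (population z-scores)."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

candidates = ["X", "Y", "Z"]
publications = [12, 4, 8]   # number of publications per candidate
teaching_years = [2, 9, 4]  # years of teaching experience per candidate

weights = [0.5, 0.5]  # equal ("unit") weights: no fitted data required

pub_z, teach_z = zscores(publications), zscores(teaching_years)
scores = {c: weights[0] * p + weights[1] * t
          for c, p, t in zip(candidates, pub_z, teach_z)}
best = max(scores, key=scores.get)
```

Even with arbitrary equal weights, the model forces both attributes into every comparison and ranks all candidates by the same rule, which is precisely the consistency that holistic judgment lacks.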
Using technological techniques to avoid irrational, intuitive decisions is not necessarily rational, however. The benefits of such techniques should always be weighed against their costs. If, for example, the adverse effects of a cognitive bias are rare or trivial, resorting to technological techniques may be unwarranted.752 However, there is evidence that even professionals underutilize cost-effective decision-support systems. Two explanations for this regrettable phenomenon are overconfidence,753 and the concern that professionals who use such systems are viewed by others as less competent.754 In the absence of clear feedback on the quality of their decisions, professionals who have a blind spot regarding their fallibilities may never learn the importance of using decision aids.
(c) Motivational Strategies: Incentives and Accountability
Motivational techniques focus on increasing the motivation to perform well. This may be done by providing incentives to overcome decision errors, and by asking people to provide other people with reasons for their decisions (accountability).
In light of the central role that incentives play in standard economic analysis (and as a corollary, the different conventions concerning the use of incentives in experimental economics and experimental psychology), the controversy over the effectiveness of incentives in eliminating cognitive biases is hardly surprising.755 As a matter of fact, incentives have proven useful in overcoming issues of procrastination and bounded willpower—cases where people know what they want to achieve, but fail to exercise the self-control necessary to achieve it. Thus, some studies have demonstrated that large financial incentives, in the range of hundreds of dollars, were effective in inducing obese people to lose weight,756 helping smokers to quit smoking,757 and forming a habit of gym exercise.758 The evidence, however, is far from conclusive.759 Incentives may be provided externally, but people can also create incentives for themselves through various commitment devices.760
Beyond issues of bounded willpower, incentives are useful when more effort produces better results, as when the quality of a decision depends on the time and energy one puts into acquiring and systematically processing information.761 Indeed, based on a review of seventy-four studies, Colin Camerer and Robin Hogarth have concluded that incentives improve performance in recalling items, solving easy problems, making certain predictions, and the like.762
Incentives are considerably less effective, or even counterproductive, when cognitive biases are not primarily due to insufficient effort.763 Thus, for example, neither framing effects nor the hindsight bias have been eliminated when experiments involved real payoffs.764 Evidence regarding the effect of incentives on the anchoring effect is mixed.765 Incentives to give the right answer do not eliminate overconfidence.766 Greater effort may result in greater confidence in one’s judgments, even if it does not actually improve accuracy.767 In one study, incentives strengthened preference reversals;768 in another, incentivized subjects were less likely to follow a reliable decision rule, and more likely to change their decision strategy after incorrect judgments—resulting in worse performance than non-incentivized subjects.769 Given the complex relationships between the various factors affecting judgment and decision-making, where motivation is but one factor, the varying effects of incentives should not be surprising.
Even when incentives have a beneficial effect, calibrating the optimal incentive may not be an easy task: when the external incentive is too small, its negative effect as a result of crowding out intrinsic motivation may surpass its positive effect;770 and when it is too big, it may result in over-motivation and lesser performance.771 In line with the former observation, incentives have also been found to reduce cooperation when they adversely affect trust, and reduce prosocial behavior when they transform a social framing into a monetary one.772 The same is true of ethical behavior, where implementing mild measures of detecting and sanctioning improper behavior may result in a reframing of the situation as involving risky costs (probability of detection and sanction levels) and benefits (from unethical behavior) to the actor, instead of an ethical dilemma.773
Finally, behavioral insights can help design effective incentives. Roland Fryer and his colleagues conducted a randomized field experiment to test the effect of financial incentives for teachers on their students’ achievements. While teachers in the control group were promised a considerable monetary award for a certain target increase in students’ performance, teachers in the treatment group were paid the same award in advance and asked to return the money if their students had not improved to the same degree. Reframing the failure to attain the desirable increase in students’ performance as a loss, rather than as an unobtained gain, had a dramatic effect. While no significant improvement was found in the control group, the improvement in the treatment group was equivalent to increasing teacher quality by more than one standard deviation.774
Like incentives, accountability—the need to justify one’s decisions to others—has mixed effects on people’s judgments and choices.775 At the outset, it should be noted that accountability is much more than just a debiasing technique: since the costs of formal means of assuring people’s compliance with social norms are usually prohibitive, internalized accountability is a powerful form of social control.776 Hence, accountability not only strengthens the motivation to make the right decision; it may alter “the right decision,” as well. Inasmuch as social approval affects one’s well-being, factoring such approval into one’s decision may turn an otherwise improper decision into a proper one, and vice versa.777 For example, it was found that accountability amplifies the effect of the status quo and omission biases in choices that involve a risk of imposing losses on identifiable constituencies.778 Arguably, this is a perfectly rational consideration from the decision-maker’s perspective.
To understand the effect of accountability on people’s judgment and decision-making, several distinctions must be drawn. One concerns timing: when a person is asked to justify a decision she has already made, accountability cannot affect the decision. However, accountability is consequential when a person makes an initial decision and then faces additional ones. Since people dislike admitting that their initial decision was wrong, accountability may enhance the confirmation bias and exacerbate the escalation of commitment.779 This is not the case with accountability for an initial decision.
Another relevant distinction is between outcome accountability and process accountability. While accountability for the quality of outcomes tends to produce greater escalation of commitment, accountability for the process by which a decision is made tends to reduce escalation of commitment, increase consistency in applying a judgment strategy, and encourage the consideration of more information in a more analytical manner.780 Possibly, the adverse effects of outcome accountability are due to decision stress and a narrowing of attention.
Finally, one should distinguish between instances where the views of the person(s) to whom the decision-maker is accountable are known, and those where they are not. Due to the conformity effect, when the audience’s views are known, accountability is likely to induce people to shift their judgments and decisions closer to those of the audience.781 This may result in a biased judgment or decision. Whether accountability transforms the way people think, or merely affects what they say they think—in this and other cases—may differ from one setting to another (and the very distinction between the two is sometimes unclear).782 In contrast, accountability to an audience whose views are unknown is much more likely to result in a thorough consideration of more information and conflicting arguments, so as to prepare the decision-maker for possible objections—a process that helps overcome cognitive biases.783
Having reviewed the research on these and other factors, Jennifer Lerner and Philip Tetlock concluded: “Self-critical and effortful thinking is most likely to be activated when decision makers learn prior to forming any opinions that they will be accountable to an audience (a) whose views are unknown, (b) who is interested in accuracy, (c) who is interested in processes rather than specific outcomes, (d) who is reasonably well informed, and (e) who has a legitimate reason for inquiring into the reasons behind participants’ judgments.”784
However, even when all these conditions are met, the beneficial effect of accountability is not guaranteed. Accountability is particularly beneficial when erroneous judgments and decisions are the product of a lack of effort. To come up with a defensible decision, accountable people are more likely to thoroughly and self-critically consider more information and conflicting arguments. Accordingly, accountability has been found to reduce the order effect and the anchoring effect, and to improve the calibration between decision-makers’ confidence and the accuracy of their decisions.785 In contrast, when a person lacks the intellectual tools or knowledge necessary to overcome cognitive errors, more effort is unlikely to improve his or her judgments. Indeed, accountability had no effect on base-rate neglect or insensitivity to sample size.786
Moreover, sometimes accountability makes things worse. This is expected to be the case when the easier-to-justify decision is the normatively inferior one. There is some evidence that accountability amplifies the compromise and attraction effects because choices reflecting these biases are easier to justify.787 For similar reasons, an effort to take into account all available information, induced by accountability, may result in worse decisions when the additional information is non-diagnostic and should have been ignored.788
In summary, while incentives and accountability sometimes help overcome cognitive biases, they are certainly no panacea. Depending on the circumstances, they may fail to improve judgments and choices, and may even make things worse.
(d) Cognitive Strategies
Having discussed technological debiasing techniques (which may be too costly for everyday decision-making) and motivational ones (which are often ineffective), we turn to cognitive techniques. Cognitive strategies for debiasing aim to help people overcome their cognitive biases, or to modify the decision environment so that people’s ordinary cognitive processes bring about better judgments and decisions. Thus, a distinction may be drawn between direct and indirect debiasing techniques. While the former aim to help people make more rational (or at least more consistent) decisions, the latter strive to counteract the effect of some biases by triggering other biases.789 Examples of direct debiasing techniques include drawing the decision-maker’s attention to the existence of the bias, asking people to think about alternative possibilities or perspectives, and training in probabilistic reasoning. Examples of indirect debiasing include setting a beneficial default, for example, for participation in a pension saving plan (thus using the omission bias to counteract people’s myopia and bounded willpower), and rearranging the food display in a cafeteria to encourage the consumption of healthy food. Whereas direct debiasing invokes System 2 thinking to correct System 1’s biases, indirect debiasing utilizes System 1’s biases to counteract other biases. Therefore, indirect debiasing does not require that the decision-maker be cognizant of his or her bias. In fact, it may be more effective when the decision-maker is unaware of the cognitive processes involved. Indirect-debiasing measures are closely connected to the notions of nudges and libertarian paternalism; hence they are discussed in Chapter 4.790 Here we review some of the findings on direct debiasing techniques.
The most straightforward reaction to cognitive biases is to draw decision-makers’ attention to their existence. However, the evidence regarding the success of such alerting is mixed. For example, drawing subjects’ attention to the “I knew it all along” effect,791 and asking them to do their best to overcome it, failed to achieve this result.792 Other studies on the effectiveness of warnings to undo the hindsight bias produced mixed results.793 Warning subjects that their assessments were too close to a given anchor (e.g., within ±10 percent or within ±20 percent of it) similarly failed to debias the anchoring and adjustment effect.794 In contrast, warnings successfully debiased a framing effect when subjects’ level of involvement in the decision was high, and very strong warnings debiased this effect even when subjects’ involvement was low.795
Another technique, which has proven more effective than simple warnings, is asking people to consider evidence or arguments that might lead to a different conclusion (consider the opposite), or to generate additional alternatives to choose from.796 Since many biases are the outcome of System 1’s associative thinking, rather than of a systematic consideration of arguments and possibilities, this simple technique induces people to employ a more analytical mode of thinking. For example, in one study, neuropsychologists were presented with an ambiguous case history and asked to estimate the probability of three possible diagnoses. In the hindsight conditions, subjects were informed that one diagnosis was correct, and asked what probability they would have assigned to each diagnosis had they been making the original diagnosis. Some of the subjects, in both the foresight and hindsight conditions, were first asked to list one reason why each of the possible diagnoses might be correct. Answering this question reduced the percentage of subjects exhibiting the hindsight bias from 58 percent to 41 percent.797 A more recent study concerned the discrepancy between the economic imperative to consider any possible purchase vis-à-vis alternative uses of one’s money (its opportunity costs), and WYSIATI (“what you see is all there is”)—the tendency to base one’s decision on immediately available information to the exclusion of all other information.
It was found that, in a choice between an expensive and a cheaper product, merely adding a reminder of the possibility of using the price difference for another purpose (e.g., “leaving you with $X to spend on something else”) considerably increased the incidence of selecting the cheaper product.798 The consider-the-opposite strategy has also been found effective in reducing the anchoring effect,799 overconfidence,800 self-serving biases,801 and more.802 However, these techniques are not always effective803—and even when they are, they usually reduce cognitive biases rather than eliminate them.
Another debiasing technique that rests on inducing people to consider more information and additional perspectives is to ask them to make the same judgment or estimation twice—possibly with a time delay between the two assessments, or by using different thinking modes—and then average the responses.804
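The averaging technique rests on elementary statistics: when two estimates of the same quantity carry independent errors, their average has a lower expected squared error than either estimate alone. The following sketch simulates this with illustrative numbers (a true value of 100 and normally distributed estimation noise); these figures are assumptions for demonstration, not data from the cited studies. In reality, a person’s own repeated estimates are correlated, so the gain from averaging is smaller than in this idealized simulation.

```python
import random
import statistics

def simulate_averaging(true_value=100.0, noise_sd=20.0, trials=10_000, seed=1):
    """Compare the mean squared error of a single estimate with that of
    the average of two independent estimates of the same quantity.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    single_errors, averaged_errors = [], []
    for _ in range(trials):
        first = rng.gauss(true_value, noise_sd)   # initial estimate
        second = rng.gauss(true_value, noise_sd)  # second, independent estimate
        single_errors.append((first - true_value) ** 2)
        averaged_errors.append(((first + second) / 2 - true_value) ** 2)
    return statistics.mean(single_errors), statistics.mean(averaged_errors)

single_mse, averaged_mse = simulate_averaging()
# With independent errors, averaging roughly halves the mean squared error.
print(single_mse > averaged_mse)  # True
```

Under the independence assumption, the variance of the average of two estimates is half the variance of each, which is why the simulated error ratio hovers around one half.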
An additional set of debiasing techniques is founded on training people to use adequate decision rules (instead of intuitive heuristics).805 For instance, people’s probabilistic assessments may be improved by studying statistics,806 and cost-benefit analysis may be improved by studying economics.807 However, there is conflicting evidence about the extent to which learning to use the relevant rules in one domain extends to other domains, as well as the extent to which such training has a lasting effect on people’s judgment and decision-making.808 People may not pause to use the rules they know—especially when their intuitive judgments are strong. In any event, very few people actually get such training.
A potentially fruitful approach is to design a debiasing device based on the variables that mediate cognitive biases.809 Thus, the finding that people make better judgments when thinking in terms of frequency rather than probability (e.g., “1 in 20,” rather than “5 percent”), has led to a training program in which subjects were taught to make probabilistic inferences by constructing frequency representations.810 It was found that teaching people to represent information in that way was more effective than teaching them Bayesian rules. While both types of training produced substantial short-term improvement, training in frequency representations was more effective in the short run and considerably more effective in the long run.
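The difference between the two representations can be made concrete with a minimal sketch. The numbers below are hypothetical (a 1-in-20 base rate, an 80 percent hit rate, and a 10 percent false-alarm rate), not figures from the cited training studies. The probability format applies Bayes’ rule directly; the frequency format reaches the same answer by counting cases in an imagined population, which is the representation the training taught.

```python
from fractions import Fraction

def posterior_from_probabilities(base_rate, hit_rate, false_alarm_rate):
    """Bayes' rule in the familiar probability format."""
    true_pos = base_rate * hit_rate
    false_pos = (1 - base_rate) * false_alarm_rate
    return true_pos / (true_pos + false_pos)

def posterior_from_frequencies(population, base_rate, hit_rate, false_alarm_rate):
    """The same inference as natural frequencies: imagine a concrete
    population and count cases."""
    affected = round(population * base_rate)                       # 50 of 1,000
    true_pos = round(affected * hit_rate)                          # 40 test positive
    false_pos = round((population - affected) * false_alarm_rate)  # 95 false alarms
    return Fraction(true_pos, true_pos + false_pos)

p = posterior_from_probabilities(0.05, 0.8, 0.1)
f = posterior_from_frequencies(1000, 0.05, 0.8, 0.1)
print(round(p, 3), f)  # both equal 40/135, i.e., about 0.296
```

The frequency version makes the structure of the inference visible: of the 135 imagined people who test positive, only 40 actually have the condition, so a positive result is far less diagnostic than the 80 percent hit rate intuitively suggests.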
In general, debiasing techniques are likely to be more effective when there is an easily demonstrable “right answer” than when decisions depend on attitudes to risks, losses, discount rates of future outcomes, and the like. Accordingly, while the findings are far from conclusive, there is some support for the claim that framing effects can be more easily debiased than, for example, the sunk-costs effect. It has been demonstrated experimentally that framing effects can be eliminated by giving appropriate warnings,811 by asking people to list the advantages and disadvantages of each option and the rationale for their decision,812 and by instructions to specifically analyze each piece of evidence.813 In contrast, attempts at debiasing the sunk-costs effect produced mixed results. While there is evidence that studying economics—including being exposed to the notion of sunk costs—does not affect the sunk-costs effect,814 it was found that economics professors are less vulnerable to this effect than professors of other disciplines.815 Instructing subjects to outline the pros and cons of each option prior to reaching a decision did not affect escalation of commitment.816
In sum, the vast literature on debiasing provides no clear, general conclusion. Some debiasing techniques are more effective than others, efficacy varies from one context to another, and some strategies are actually counterproductive. Moreover, just as it has been argued that some of the biases identified in laboratory experiments may disappear in real-life contexts, it may well be that debiasing measures that have proven effective in the laboratory would not be as effective in the real world, if at all.817 Some of the abovementioned techniques may be used by individuals, some could be employed by organizations, and some could be implemented by the government. Implementation of these techniques by the government (and all the more so of indirect debiasing techniques and nudges) raises a host of normative and policy questions that will be discussed in Chapter 4.818
I. Concluding Remarks
This chapter offered a bird’s-eye view of the psychological studies informing the behavioral-economic analyses of law. It focused on JDM research, but integrated the findings of other areas in psychology and touched upon other disciplines, such as experimental economics.
The numerous references to specific studies, meta-analyses, and reviews of the psychological literature provided throughout the chapter should enable the interested reader to get a fuller and more nuanced picture of the pertinent issues. Since psychological research is constantly developing, an up-to-date view requires reading the most recent studies, for example by looking up works that cite the articles and books cited in this chapter.
While there are still many unknowns in people’s motivation, judgment, and decision-making, so much is known that it would seem pointless to ignore this huge body of knowledge and stick to unrealistic, abstract models of human behavior. Against this backdrop, Part II will provide a general synopsis of behavioral law and economics, and the ensuing parts will discuss specific legal fields.
(1.) JOHN VON NEUMANN & OSKAR MORGENSTERN, THEORY OF GAMES AND ECONOMIC BEHAVIOR (2d ed. 1947).
(2.) On the intellectual roots of JDM research and its development, see WILLIAM M. GOLDSTEIN & ROBIN M. HOGARTH, RESEARCH ON JUDGMENT AND DECISION MAKING: CURRENTS, CONNECTIONS, AND CONTROVERSIES 3–65 (1997); Ulrike Hahn & Adam J.L. Harris, What Does It Mean to Be Biased: Motivated Reasoning and Rationality, 61 PSYCHOL. LEARNING & MOTIVATION 42 (2014); Gideon Keren & George Wu, A Bird’s-Eye View of the History of Judgment and Decision Making, in 1 THE WILEY BLACKWELL HANDBOOK OF JUDGMENT AND DECISION MAKING 1 (Gideon Keren & George Wu eds., 2015).
(3.) On the connections between JDM and social psychology, see Thomas D. Gilovich & Dale W. Griffin, Judgment and Decision Making, in 1 HANDBOOK OF SOCIAL PSYCHOLOGY 542 (Susan T. Fiske, Daniel T. Gilbert & Gardner Lindzey eds., 5th ed. 2010).
(4.) See, e.g., Alan G. Sanfey & Mirre Stallen, Neurosciences Contribution to Judgment and Decision Making: Opportunities and Limitations, in WILEY BLACKWELL HANDBOOK, supra note 2, at 268; HANDBOOK OF NEUROSCIENCE FOR THE BEHAVIORAL SCIENCES (Gary G. Berntson & John T. Cacioppo eds., 2009).
(5.) See, e.g., ADVANCES IN BEHAVIORAL ECONOMICS (Colin F. Camerer, George Loewenstein & Matthew Rabin eds., 2003); Nicholas C. Barberis, Thirty Years of Prospect Theory in Economics: A Review and Assessment, 27 J. ECON. PERSP. 173 (2013).
(6.) See, e.g., Nicholas C. Barberis & Richard H. Thaler, A Survey of Behavioral Finance, in 1B HANDBOOK OF THE ECONOMICS OF FINANCE 1053 (George M. Constantinides, René M. Stulz & Milton Harris eds., 2003); HANDBOOK OF BEHAVIORAL FINANCE (Brian Bruce ed., 2010).
(7.) See, e.g., THE OXFORD HANDBOOK OF POLITICAL PSYCHOLOGY (Leonie Huddy, David O. Sears & Jack S. Levy eds., 2013). For an application of JDM insights to political philosophy, see, e.g., JAMIE TERENCE KELLY, FRAMING DEMOCRACY: A BEHAVIORAL APPROACH TO DEMOCRATIC THEORY (2012).
(8.) See, e.g., BEHAVIORAL LAW AND ECONOMICS (Cass R. Sunstein ed., 2000); THE OXFORD HANDBOOK OF BEHAVIORAL ECONOMICS AND THE LAW (Eyal Zamir & Doron Teichman eds., 2014).
(9.) THE HANDBOOK OF EXPERIMENTAL ECONOMICS RESULTS (Charles R. Plott & Vernon L. Smith eds., 2008); Christoph Engel, Behavioral Law and Economics: Empirical Methods, in THE OXFORD HANDBOOK OF BEHAVIORAL ECONOMICS AND THE LAW, supra note 8, at 125; EXPERIMENTAL PHILOSOPHY (Joshua Knobe & Shaun Nichols eds.) Vols. 1 (2008) & 2 (2014).
(10.) On cognitive and motivational rationality, see supra pp. 8–12.
(11.) See infra pp. 101–10.
(12.) See infra pp. 94–97 and 72–76, respectively.
(13.) See generally DUAL-PROCESS THEORIES IN SOCIAL PSYCHOLOGY (Shelly Chaiken & Yaacov Trope eds., 1999).
(14.) Keith E. Stanovich & Richard F. West, Individual Differences in Reasoning: Implications for the Rationality Debate?, 23 BEHAV. & BRAIN SCI. 645 (2000); DANIEL KAHNEMAN, THINKING, FAST AND SLOW (2011). See also Daniel Kahneman & Shane Frederick, Representativeness Revisited: Attribute Substitution in Intuitive Judgment, in HEURISTICS AND BIASES: THE PSYCHOLOGY OF INTUITIVE JUDGMENT 49 (Thomas Gilovich, Dale Griffin & Daniel Kahneman eds., 2002); Jonathan St. B.T. Evans, Dual-Processing Accounts of Reasoning, Judgment, and Social Cognition, 59 ANN. REV. PSYCHOL. 255 (2008).
(15.) See, e.g., Seymour Epstein, Integration of the Cognitive and the Psychodynamic Unconscious, 49 AM. PSYCHOLOGIST 709 (1994).
(16.) See, e.g., ANTONIO R. DAMASIO, DESCARTES’ ERROR: EMOTION, REASON, AND THE HUMAN BRAIN (1994); Jonathan Haidt, The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment, 108 PSYCHOL. REV. 814 (2001). For recent overviews, see Dacher Keltner & E.J. Horberg, Emotion-Cognition Interactions, in APA HANDBOOK OF PERSONALITY AND SOCIAL PSYCHOLOGY, VOL. 1: ATTITUDES AND SOCIAL COGNITION 623, 637–52 (Mario Mikulincer et al. eds., 2015); Jennifer S. Lerner et al., Emotions and Decision Making, 66 ANN. REV. PSYCHOL. 799 (2015). See also infra pp. 44–45, 100.
(17.) See, e.g., Vinod Goel & Raymond J. Dolan, Explaining Modulation of Reasoning by Belief, 87 COGNITION B11 (2003); Matthew D. Lieberman, Social Cognitive Neuroscience: A Review of Core Processes, 58 ANN. REV. PSYCHOL. 259 (2007).
(19.) Jonathan St. B.T. Evans, The Heuristic-Analytic Theory of Reasoning: Extension and Evaluation, 13 PSYCHONOMIC BULL. & REV. 378 (2006); KEITH E. STANOVICH, RATIONALITY AND THE REFLECTIVE MIND 19–22 (2011).
(20.) Valerie Thompson, Dual-Process Theories: A Metacognitive Perspective, in IN TWO MINDS: DUAL PROCESSES AND BEYOND 171 (Jonathan St. B.T. Evans & Keith Frankish eds., 2009). See also Emmanuel Trouche et al., The Selective Laziness of Reasoning, 40 COGNITIVE SCI. 2122 (2016).
(21.) STANOVICH, supra note 19. For a shorter exposition, see Keith E. Stanovich, On the Distinction between Rationality and Intelligence: Implications for Understanding Individual Differences in Reasoning, in THE OXFORD HANDBOOK OF THINKING AND REASONING 343 (Keith J. Holyoak & Robert G. Morrison, Jr. eds., 2012).
(22.) Thus, both systems are often involved in a single decision, and there may be a continuum, rather than a dichotomy, between the automatic and deliberative modes of thinking. See Magda Osman, An Evaluation of Dual-Process Theories of Reasoning, 11 PSYCHONOMIC BULL. & REV. 988 (2004).
(23.) Shane Frederick, Cognitive Reflection and Decision Making, 19 J. ECON. PERSP. 25 (2005).
(24.) Maggie E. Toplak, Richard F. West & Keith E. Stanovich, Assessing Miserly Information Processing: An Expansion of the Cognitive Reflection Test, 20 THINKING & REASONING 147 (2014).
(25.) John T. Cacioppo & Richard E. Petty, The Need for Cognition, 42 J. PERSONALITY & SOC. PSYCHOL. 116 (1982); John T. Cacioppo, Richard E. Petty & Chuan Feng Kao, The Efficient Assessment of Need for Cognition, 48 J. PERSONALITY ASSESSMENT 306 (1984).
(26.) See, e.g., Anastasiya Pocheptsova et al., Deciding without Resources: Resource Depletion and Choice in Context, 46 J. MARKETING RES. 344 (2009).
(27.) Gerd Gigerenzer & Daniel G. Goldstein, Reasoning the Fast and Frugal Way: Models of Bounded Rationality, 103 PSYCHOL. REV. 650 (1996); Kahneman & Frederick, supra note 14, at 59–60. See also BETTER THAN CONSCIOUS? DECISION MAKING, THE HUMAN MIND, AND IMPLICATIONS FOR INSTITUTIONS (Christoph Engel & Wolf Singer eds., 2008).
(28.) See infra pp. 25–26.
(29.) Daniel Kahneman & Shane Frederick, A Model of Heuristic Judgment, in THE CAMBRIDGE HANDBOOK OF THINKING AND REASONING 267, 287 (Keith J. Holyoak & Robert G. Morrison eds., 2005).
(30.) See also infra pp. 34–36.
(31.) See infra pp. 28–30.
(32.) Fritz Strack, Leonard L. Martin & Norbert Schwarz, Priming and Communication: The Social Determinants of Information Use in Judgments of Life Satisfaction, 18 EUR. J. SOC. PSYCHOL. 429 (1988); Kahneman & Frederick, supra note 29, at 269.
(33.) Anuj K. Shah & Daniel M. Oppenheimer, Heuristics Made Easy: An Effort-Reduction Framework, 134 PSYCHOL. BULL. 207 (2008).
(37.) Eyal Zamir, Ilana Ritov & Doron Teichman, Seeing Is Believing: The Anti-Inference Bias, 89 IND. L.J. 195 (2014).
(38.) Amos Tversky & Daniel Kahneman, Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment, 90 PSYCHOL. REV. 293, 313 (1983).
(39.) See, e.g., GERD GIGERENZER, PETER TODD & THE ABC RESEARCH GROUP, SIMPLE HEURISTICS THAT MAKE US SMART (1999).
(40.) Peter C. Wason, Reasoning, in 1 NEW HORIZONS IN PSYCHOLOGY 135, 145–46 (Brian M. Foss ed., 1966). On the confirmation bias, see infra pp. 58–61.
(41.) Gerd Gigerenzer & Klaus Hug, Domain-Specific Reasoning: Social Contracts, Cheating and Perspective Change, 42 COGNITION 127 (1992). In the same vein, it was found that people’s probability inferences are more accurate when probabilities are presented in frequency formats (e.g., 1 in 20) than in percentages (e.g., 5 percent). See, e.g., Gerd Gigerenzer & Ulrich Hoffrage, How to Improve Bayesian Reasoning without Instruction: Frequency Formats, 102 PSYCHOL. REV. 684 (1995). Note, however, that nowadays probabilities are more often presented in real life in percentage formats than in frequency formats.
(42.) This claim echoes Herbert Simon’s claim that due to their limitations, people often act as “satisficers”—rather than maximizers—of their utility, and sensibly so. See, e.g., HERBERT A. SIMON, ADMINISTRATIVE BEHAVIOR: A STUDY OF DECISION-MAKING PROCESSES IN ADMINISTRATIVE ORGANIZATION (1947; 4th ed. 1997); Herbert A. Simon, A Behavioral Model of Rational Choice, 69 Q.J. ECON. 99 (1955).
(43.) See, e.g., Daniel G. Goldstein & Gerd Gigerenzer, Models of Ecological Rationality: The Recognition Heuristic, 109 PSYCHOL. REV. 75 (2002).
(44.) Gerd Gigerenzer & Wolfgang Gaissmaier, Heuristic Decision Making, 62 ANN. REV. PSYCHOL. 451, 455 (2011).
(45.) For a comprehensive analysis of the debate between the heuristics-and-biases and the fast-and-frugal-heuristics schools, see MARK KELMAN, THE HEURISTICS DEBATE 19–116 (2011). For concise descriptions, see, e.g., Hahn & Harris, supra note 2, at 49–53; Jonathan Baron, supra note 34, at 11–14.
(47.) See, e.g., JONATHAN BARON, THINKING AND DECIDING 54 (4th ed. 2008).
(48.) Another possible cause is the diversity of disciplines to which researchers in the field belong, including psychology, economics, marketing, finance, and law. See Gilovich & Griffin, supra note 3, at 542.
(49.) Joachim I. Krueger & David C. Funder, Towards a Balanced Social Psychology: Causes, Consequences, and Cures for the Problem-Seeking Approach to Social Behavior and Cognition, 27 BEHAV. & BRAIN SCI. 313 (2004) (the article is followed by thirty-five critical comments and the authors’ reaction. See id. at 328–67); Elke U. Weber & Eric J. Johnson, Mindful Judgment and Decision Making, 60 ANN. REV. PSYCHOL. 53 (2009).
(51.) See also infra pp. 152–54.
(54.) Amos Tversky & Daniel Kahneman, Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment, 90 PSYCHOL. REV. 293, 297 (1983).
(57.) Daniel Kahneman & Amos Tversky, Subjective Probability: A Judgment of Representativeness, 3 COGNITIVE PSYCHOL. 430, 431 (1972).
(58.) On the Fast-and-Frugal school, see supra pp. 25–26.
(59.) See, e.g., Ralph Hertwig & Gerd Gigerenzer, The “Conjunction Fallacy” Revisited: How Intelligent Inferences Look Like Reasoning Errors, 12 J. BEHAV. DECISION MAKING 275 (1999).
(60.) Barbara Mellers, Ralph Hertwig & Daniel Kahneman, Do Frequency Representations Eliminate Conjunction Effects? An Exercise in Adversarial Collaboration, 12 PSYCHOL. SCI. 269 (2001).
(62.) Maya Bar-Hillel & Efrat Neter, How Alike Is It versus How Likely Is It: A Disjunction Fallacy in Probability Judgments, 65 J. PERSONALITY & SOC. PSYCHOL. 1119 (1993).
(63.) Maya Bar-Hillel, The Base-Rate Fallacy in Probability Judgments, 44 ACTA PSYCHOLOGICA 211 (1980).
(64.) RICHARD NISBETT & LEE ROSS, HUMAN INFERENCE: STRATEGIES AND SHORTCOMINGS OF SOCIAL JUDGMENT 147–50 (1980).
(65.) Daniel Kahneman & Amos Tversky, On the Psychology of Prediction, 80 PSYCHOL. REV. 237 (1973).
(66.) Jonathan J. Koehler, The Base Rate Fallacy Reconsidered: Descriptive, Normative, and Methodological Challenges, 19 BEHAV. & BRAIN SCI. 1 (1996).
(68.) See, e.g., Baruch Fischhoff, Paul Slovic & Sarah Lichtenstein, Subjective Sensitivity Analysis, 23 ORG. BEHAV. & HUM. PERFORMANCE 339 (1979).
(69.) Leda Cosmides & John Tooby, Are Humans Good Intuitive Statisticians After All? Rethinking Some Conclusions from the Literature on Judgment under Uncertainty, 58 COGNITION 1 (1996).
(70.) Koehler, supra note 66, at 12–13. In real life, including in the legal arena, objective data about the base rate is often unavailable. This means that even decision-makers who pay attention to the base rate may come to wrong conclusions if their subjective assessment of the base rate is inaccurate. See Michael J. Saks & Michael Risinger, The Presumption of Guilt, Admissibility Rulings, and Erroneous Convictions, 2003 MICH. ST. DCL L. REV. 1051.
(71.) Kahneman & Tversky, supra note 65; Zvi Ginossar & Yaacov Trope, The Effects of Base Rates and Individuating Information on Judgments about Another Person, 16 J. EXPERIMENTAL SOC. PSYCHOL. 228 (1980).
(72.) Lívia Markóczy & Jeffrey Goldberg, Women and Taxis and Dangerous Judgments: Content Sensitive Use of Base-Rate Information, 19 MANAGERIAL & DECISION ECON. 481 (1998).
(74.) SCOTT PLOUS, THE PSYCHOLOGY OF JUDGMENT AND DECISION MAKING 131–34 (1993); Gaëlle Villejoubert & David R. Mandel, The Inverse Fallacy: An Account of Deviations from Bayes’s Theorem and the Additivity Principle, 30 MEMORY & COGNITION 171 (2002).
(75.) Jeffrey J. Rachlinski, Heuristics and Biases in the Courts: Ignorance or Adaptation?, 79 OR. L. REV. 61, 82–85 (2000).
(77.) See Ward Casscells, Arno Schoenberger & Thomas B Graboys, Interpretation by Physicians of Clinical Laboratory Results, 299 NEW ENG. J. MED. 999 (1978); David M. Eddy, Probabilistic Reasoning in Clinical Medicine: Problems and Opportunities, in JUDGMENT UNDER UNCERTAINTY: HEURISTICS AND BIASES 249 (Daniel Kahneman, Paul Slovic & Amos Tversky eds., 1982). See also infra p. 581.
(78.) Amos Tversky & Daniel Kahneman, Belief in the Law of Small Numbers, 76 PSYCHOL. BULL. 105 (1971).
(80.) Maya Bar-Hillel & Willem A. Wagenaar, The Perception of Randomness, 12 ADVANCES APPLIED MATHEMATICS 428 (1991).
(82.) Thomas Gilovich, Robert Vallone & Amos Tversky, The Hot Hand in Basketball: On the Misperception of Random Sequences, 17 COGNITIVE PSYCHOL. 295 (1985).
(83.) See, e.g., Mark M. Carhart, On Persistence in Mutual Fund Performance, 52 J. FIN. 57 (1997); GUILLERMO BAQUERO, ON HEDGE FUND PERFORMANCE, CAPITAL FLOWS AND INVESTOR PSYCHOLOGY 89–126 (2006).
(85.) Eric Gold & Gordon Hester, The Gambler’s Fallacy and the Coin’s Memory, in RATIONALITY AND SOCIAL RESPONSIBILITY: ESSAYS IN HONOR OF ROBYN MASON DAWES 21 (Joachim I. Krueger ed., 2008).
(86.) See, e.g., Charles Clotfelter & Phil Cook, The “Gambler’s Fallacy” in Lottery Play, 39 MGMT. SCI. 1521 (1993).
(87.) Howard Wainer & Harris L. Zwerling, Evidence That Smaller Schools Do Not Improve Student Achievement, 88 PHI DELTA KAPPAN 300 (2006).
(89.) Maurice Allais, Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école Américaine, 21 ECONOMETRICA 503 (1953).
(90.) Daniel Kahneman & Amos Tversky, Prospect Theory: An Analysis of Decision under Risk, 47 ECONOMETRICA 263, 265–67, 280–84 (1979).
(91.) Amos Tversky & Daniel Kahneman, Advances in Prospect Theory: Cumulative Representation of Uncertainty, 5 J. RISK & UNCERTAINTY 297, 303 (1992).
(92.) Yuval Rottenstreich & Christopher K. Hsee, Money, Kisses, and Electric Shocks: On the Affective Psychology of Risk, 12 PSYCHOL. SCI. 185 (2001).
(94.) Amos Tversky & Daniel Kahneman, Availability: A Heuristic for Judging Frequency and Probability, 4 COGNITIVE PSYCHOL. 207 (1973) [hereinafter Tversky & Kahneman, Availability]. See also Amos Tversky & Daniel Kahneman, Judgment under Uncertainty: Heuristics and Biases, 185 SCI. 1124, 1127–28 (1974) [hereinafter Tversky & Kahneman, Heuristics and Biases].
(95.) In addition to availability in the sense of the ease of retrieving information from memory, Tversky and Kahneman discussed also availability of construction, namely the ease of generating examples of items that meet a certain criterion, such as words starting with the letter “K” versus words in which the third letter is “K.” They showed that availability in that sense also affects judgments of frequency. Tversky & Kahneman, Availability, supra note 94, at 211–20.
(97.) For an overview, see Norbert Schwarz & Leigh Ann Vaughn, The Availability Heuristic Revisited: Ease of Recall and Content of Recall as Distinct Sources of Information, in HEURISTICS AND BIASES, supra note 14, at 103.
(98.) João N. Braga, Mário B. Ferreira & Steven J. Sherman, The Effects of Construal Level on Heuristic Reasoning: The Case of Representativeness and Availability, 2 DECISION 216 (2015). See also Cheryl Wakslak & Yaacov Trope, The Effect of Construal Level on Subjective Probability Estimates, 20 PSYCHOL. SCI. 52 (2010).
(100.) Steven J. Sherman et al., Imagining Can Heighten or Lower the Perceived Likelihood of Contracting a Disease: The Mediating Effect of Ease of Imagery, 11 PERSONALITY & SOC. PSYCHOL. BULL. 118 (1985).
(101.) Carmen Keller, Michael Siegrist & Heinz Gutscher, The Role of the Affect and Availability Heuristics in Risk Communication, 26 RISK ANALYSIS 631 (2006). On the affect heuristic, see generally Paul Slovic et al., The Affect Heuristic, in HEURISTICS AND BIASES, supra note 14, at 397.
(102.) Timur Kuran & Cass R. Sunstein, Availability Cascades and Risk Regulation, 51 STAN. L. REV. 683 (1999). For a specific example, see Russell Eisenman, Belief That Drug Usage in the United States Is Increasing when It Is Really Decreasing: An Example of the Availability Heuristic, 31 BULL. PSYCHONOMIC SOC’Y 249 (1993).
(103.) Availability cascades may even bring about moral panic, namely the disproportionate public reaction to perceived threats to moral values, coupled with widespread anxiety and strong hostility toward the people involved in the threatening activities. See ERICH GOODE & NACHMAN BEN-YEHUDA, MORAL PANICS: THE SOCIAL CONSTRUCTION OF DEVIANCE (2d ed. 2009).
(104.) Tilmann Betsch & Devika Pohl, Tversky and Kahneman’s Availability Approach to Frequency Judgment: A Critical Analysis, in ETC. FREQUENCY PROCESSING AND COGNITION 109 (Peter Sedlmeier & Tilmann Betsch eds., 2002).
(105.) Diederik A. Stapel, Stephen D. Reicher & Russell Spears, Contextual Determinants of Strategic Choice: Some Moderators of the Availability Bias, 25 EUR. J. SOC. PSYCHOL. 141 (1995).
(106.) Brad M. Barber & Terrance Odean, All That Glitters: The Effect of Attention and News on the Buying Behavior of Individual and Institutional Investors, 21 REV. FIN. STUD. 785 (2008).
(107.) Baruch Fischhoff, Paul Slovic & Sarah Lichtenstein, Fault Trees: Sensitivity of Estimated Failure Probabilities to Problem Representation, 4 J. EXPERIMENTAL PSYCHOL.: HUM. PERCEPTION & PERFORMANCE 330 (1978).
(108.) See supra p. 24.
(109.) Amos Tversky & Derek J. Koehler, Support Theory: A Nonextensional Representation of Subjective Probability, 101 PSYCHOL. REV. 547 (1994). Subadditivity in probability judgments is possibly one manifestation of a broader phenomenon whereby an increase in the number of assessed categories results in higher assessment of their value, attractiveness, and so forth. See Ian Bateman et al., Does Part-Whole Bias Exist? An Experimental Investigation, 107 ECON. J. 322 (1997); Avishalom Tor & Dotan Oliar, Incentives to Create under a “Lifetime-Plus-Years” Copyright Duration: Lessons from a Behavioral Economic Analysis of Eldred v. Ashcroft, 36 LOY. L.A. L. REV. 437, 463–76 (2002) (surveying the literature).
(112.) Lorraine Chen Idson et al., The Relation between Probability and Evidence Judgment: An Extension of Support Theory, 22 J. RISK & UNCERTAINTY 227 (2001).
(114.) Baruch Fischhoff, Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgment under Uncertainty, 1 J. EXPERIMENTAL PSYCHOL.: HUM. PERCEPTION & PERFORMANCE 288 (1975).
(115.) For reviews and meta-analyses, see Scott A. Hawkins & Reid Hastie, Hindsight: Biased Judgments of Past Events after the Outcomes Are Known, 107 PSYCHOL. BULL. 311 (1990); Jay J.J. Christensen-Szalanski & Cynthia Fobian Willham, The Hindsight Bias: A Meta-analysis, 48 ORG. BEHAV. & HUM. DECISION PROCESSES 147 (1991); Rebecca L. Guilbault et al., A Meta-Analysis of Research on Hindsight Bias, 26 BASIC & APP. SOC. PSYCHOL. 103 (2004); Neal J. Roese & Kathleen D. Vohs, Hindsight Bias, 7 PERSP. PSYCHOL. SCI. 411 (2012).
(116.) See, e.g., Baruch Fischhoff & Ruth Beyth, “I Knew It Would Happen”: Remembered Probabilities of Once-Future Things, 13 ORG. BEHAV. & HUM. PERFORMANCE 1 (1975).
(117.) See Dustin P. Calvillo & Abraham M. Rutchick, Domain Knowledge and Hindsight Bias among Poker Players, 27 J. BEHAV. DECISION MAKING 259 (2014); Daniel M. Bernstein et al., Hindsight Bias from 3 to 95 Years of Age, 37 J. EXPERIMENTAL PSYCHOL.: LEARNING, MEMORY & COGNITION 378 (2011).
(118.) See, e.g., John C. Anderson, D. Jordan Lowe & Philip M.J. Reckers, Evaluation of Auditor Decisions: Hindsight Bias Effects and the Expectation Gap, 14 J. ECON. PSYCHOL. 711 (1993) (auditing); Hal R. Arkes et al., Eliminating the Hindsight Bias, 73 J. APP. PSYCHOL. 305 (1988) (medicine); Kim A. Kamin & Jeffrey J. Rachlinski, Ex Post ≠ Ex Ante: Determining Liability in Hindsight, 19 LAW & HUM. BEHAV. 89 (1995) (law).
(121.) See id. at 415–16.
(122.) See, e.g., Baruch Fischhoff, Perceived Informativeness of Facts, 3 J. EXPERIMENTAL PSYCHOL.: HUM. PERCEPTION & PERFORMANCE 349, 354–56 (1977) (debiasing instructions); Wolfgang Hell et al., Hindsight Bias: An Interaction of Automatic and Motivational Factors?, 16 MEMORY & COGNITION 533 (1988) (financial incentives).
(125.) FRANK H. KNIGHT, RISK, UNCERTAINTY AND PROFIT (1921).
(126.) Daniel Ellsberg, Risk, Ambiguity, and the Savage Axioms, 75 Q.J. ECON. 643 (1961).
(127.) Laure Cabantous, Ambiguity Aversion in the Field of Insurance: Insurers’ Attitude to Imprecise and Conflicting Probability Estimates, 62 THEORY & DECISION 219 (2007). On source-dependence in ambiguity aversion, see also Stefan T. Trautmann & Gijs van de Kuilen, Ambiguity Attitudes, in WILEY BLACKWELL HANDBOOK, supra note 2, at 89, 94–96, 106–07.
(128.) Colin Camerer & Martin Weber, Recent Developments in Modeling Preferences: Uncertainty and Ambiguity, 5 J. RISK & UNCERTAINTY 325, 330–32 (1992).
(132.) Deborah Frisch & Jonathan Baron, Ambiguity and Rationality, 1 J. BEHAV. DECISION MAKING 149 (1988).
(133.) Chip Heath & Amos Tversky, Preference and Belief: Ambiguity and Competence in Choice under Uncertainty, 4 J. RISK & UNCERTAINTY 5 (1991).
(134.) Indirect support for this conjecture may be found in a subsequent study that showed that ambiguity aversion is present when people face a choice between risky and ambiguous bets, or when they compare themselves with more knowledgeable individuals, but not when such comparisons are unavailable. See Craig R. Fox & Amos Tversky, Ambiguity Aversion and Comparative Ignorance, 110 Q.J. ECON. 585 (1995).
(135.) LEONARD J. SAVAGE, THE FOUNDATIONS OF STATISTICS (1954).
(137.) Itzhak Gilboa & David Schmeidler, Maxmin Expected Utility with Non-unique Prior, 18 J. MATHEMATICAL ECON. 141 (1989).
(139.) See, e.g., Paul Slovic & Amos Tversky, Who Accepts Savage’s Axiom?, 19 BEHAV. SCI. 368 (1974).
(141.) Tversky & Kahneman, supra note 91, at 303. For a review of the literature on risk attitude under prospect theory, see Craig R. Fox, Carsten Erner & Daniel J. Walters, Decision under Risk: From the Field to the Laboratory and Back, in WILEY BLACKWELL HANDBOOK, supra note 2, at 41. See also infra pp. 85–86.
(143.) John K. Horowitz & Kenneth E. McConnell, A Review of WTA/WTP Studies, 44 J. ENVTL. ECON. & MGMT. 426 (2002). On the endowment effect, see infra pp. 50–56.
(144.) Serdar Sayman & Ayşe Öncüler, Effects of Study Design Characteristics on the WTA-WTP Disparity: A Meta Analytical Framework, 26 J. ECON. PSYCHOL. 289, 300, 302 (2005).
(146.) See supra p. 34.
(147.) Daniel Kahneman, Maps of Bounded Rationality: Psychology for Behavioral Economics, 93 AM. ECON. REV. 1449, 1457 (2003).
(148.) Shlomo Benartzi & Richard H. Thaler, Myopic Loss Aversion and the Equity Premium Puzzle, 110 Q.J. ECON. 73 (1995).
(149.) See infra pp. 510–12.
(150.) Colin F. Camerer, Prospect Theory in the Wild: Evidence from the Field, in CHOICES, VALUES, AND FRAMES 288 (Daniel Kahneman & Amos Tversky eds., 2000); Stefano DellaVigna, Psychology and Economics: Evidence from the Field, 47 J. ECON. LITERATURE 315, 324–36 (2009); Barberis, supra note 5.
(152.) See also infra pp. 187–97. For further refinements of prospect theory’s claims about people’s attitude to risk and uncertainty, and competing accounts of these issues, see, e.g., Tversky & Kahneman, supra note 91; Amos Tversky & Craig R. Fox, Weighing Risk and Uncertainty, 102 PSYCHOL. REV. 269 (1995); Michael H. Birnbaum & Alfredo Chavez, Tests of Theories of Decision Making: Violations of Branch Independence and Distribution Independence, 71 ORG. BEHAV. & HUM. DECISION PROCESSES 161 (1997); Charles A. Holt & Susan K. Laury, Risk Aversion and Incentive Effects, 92 AM. ECON. REV. 1644 (2002); BARON, supra note 47, at 271–74. On the neural basis of these phenomena, see, e.g., Joshua A. Weller et al., Neural Correlates of Adaptive Decision Making for Risky Gains and Losses, 18 PSYCHOL. SCI. 958 (2007). Finally, on the evolutionary roots and neural basis of loss aversion and related phenomena, see ZAMIR, supra note 151, at 42–46.
(153.) See, e.g., Roy F. Baumeister et al., Bad Is Stronger than Good, 5 REV. GENERAL PSYCHOL. 323 (2001); Paul Rozin & Edward B. Royzman, Negativity Bias, Negativity Dominance, and Contagion, 5 PERSONALITY & SOC. PSYCHOL. REV. 296 (2001).
(154.) Karen S. Rook, The Negative Side of Social Interaction: Impact on Psychological Well-Being, 46 J. PERSONALITY & SOC. PSYCHOL. 1097 (1984).
(155.) See, e.g., Guy Hochman & Eldad Yechiam, Loss Aversion in the Eye and in the Heart: The Autonomic Nervous System’s Responses to Losses, 24 J. BEHAV. DECISION MAKING 140 (2011).
(157.) See, e.g., Peter Sokol-Hessner et al., Emotion Regulation Reduces Loss Aversion and Decreases Amygdala Responses to Losses, 8 SOC. COGNITIVE & AFFECTIVE NEUROSCI. 341 (2013).
(158.) Benedetto De Martino, Colin F. Camerer & Ralph Adolphs, Amygdala Damage Eliminates Monetary Loss Aversion, 107 PROC. NAT’L ACAD. SCI. USA 3788 (2010).
(159.) Peter A. Bibby & Eamonn Ferguson, The Ability to Process Emotional Information Predicts Loss Aversion, 51 PERSONALITY & INDIVIDUAL DIFFERENCES 263 (2011).
(160.) See, e.g., Botond Köszegi & Matthew Rabin, Reference-Dependent Risk Attitude, 97 AM. ECON. REV. 1047 (2007); Johannes Abeler et al., Reference Points and Effort Provision, 101 AM. ECON. REV. 470 (2011).
(161.) Daniel Kahneman & Amos Tversky, Choices, Values, and Frames, 39 AM. PSYCHOLOGIST 341, 349 (1984).
(162.) See, e.g., Hal R. Arkes et al., Reference Point Adaptation: Tests in the Domain of Security Trading, 105 ORG. BEHAV. & HUM. DECISION PROCESSES 67 (2008); Daniel Kahneman, Jack L. Knetsch & Richard H. Thaler, Experimental Tests of the Endowment Effect and the Coase Theorem, 98 J. POL. ECON. 1325 (1990).
(163.) Philip Brickman, Dan Coates & Ronnie Janoff-Bulman, Lottery Winners and Accident Victims: Is Happiness Relative?, 36 J. PERSONALITY & SOC. PSYCHOL. 917 (1978); Jason Riis et al., Ignorance of Hedonic Adaptation to Hemodialysis: A Study Using Ecological Momentary Assessment, 134 J. EXPERIMENTAL PSYCHOL.: GENERAL 3 (2005). See also infra pp. 343–48.
(164.) See generally ZAMIR, supra note 151, at 9–10.
(165.) Chip Heath, Richard P. Larrick & George Wu, Goals as Reference Points, 38 COGNITIVE PSYCHOL. 79 (1999); Russell Korobkin, Aspirations and Settlement, 88 CORNELL L. REV. 1, 44–48 (2002); Abeler et al., supra note 160.
(166.) Excessively high rewards may, however, produce the opposite effect. See Heath, Larrick & Wu, supra note 165, at 89–93; Vikram S. Chib et al., Neural Mechanisms Underlying Paradoxical Performance for Monetary Incentives Are Driven by Loss Aversion, 74 NEURON 582 (2012).
(167.) See infra pp. 76–86.
(168.) Amos Tversky & Daniel Kahneman, The Framing of Decisions and the Psychology of Choice, 211 SCI. 453, 453 (1981).
(169.) For various typologies of these paradigms, see Anton Kühberger, The Influence of Framing on Risky Decisions: A Meta-analysis, 75 ORG. BEHAV. & HUM. DECISION PROCESSES 23 (1998); Irwin P. Levin, Sandra L. Schneider & Gary J. Gaeth, All Frames Are Not Created Equal: A Typology and Critical Analysis of Framing Effects, 76 ORG. BEHAV. & HUM. DECISION PROCESSES 149 (1998).
(172.) Beth E. Meyerowitz & Shelly Chaiken, The Effect of Message Framing on Breast Self-Examination Attitudes, Intentions, and Behavior, 52 J. PERSONALITY & SOC. PSYCHOL. 500 (1987) (finding an effect); Karen M. Lalor & B. Jo Hailey, The Effects of Message Framing and Feelings of Susceptibility to Breast Cancer on Reported Frequency of Breast Self-Examination, 10 INT’L Q. COMMUNITY HEALTH EDUC. 183 (1990) (failing to replicate Meyerowitz and Chaiken’s results). For a literature review, see Levin, Schneider & Gaeth, supra note 169, at 167–78; Kühberger, supra note 169, at 32–33, 37–38 (concluding, on the basis of a meta-analysis of thirteen studies using the message compliance design—the equivalent of goal framing—that this design does not generally produce a framing effect).
(174.) Irwin P. Levin & Gary J. Gaeth, How Consumers Are Affected by the Framing of Attribute Information Before and After Consuming the Product, 15 J. CONSUMER RES. 374 (1988).
(177.) Lewis Petrinovich & Patricia O’Neill, Influence of Wording and Framing Effects on Moral Intuitions, 17 ETHOLOGY & SOCIOBIOLOGY 145, 162–64 (1996); Levin, Schneider & Gaeth, supra note 169, at 153, 174.
(178.) See infra pp. 179–82, 249–52, 427–28.
(179.) See infra pp. 286–87, 292–96.
(180.) See, e.g., Laura A. Siminoff & John H. Fetting, Effects of Outcome Framing on Treatment Decisions in the Real World: Impact of Framing on Adjuvant Breast Cancer Decisions, 9 MED. DECISION MAKING 262 (1989); Annette M. O’Connor, Ross A. Penne & Robert E. Dales, Framing Effects on Expectations, Decisions, and Side Effects Experienced: The Case of Influenza Immunization, 49 J. CLINICAL EPIDEMIOLOGY 1271 (1996) (describing the results of a field experiment).
(181.) James N. Druckman, Using Credible Advice to Overcome Framing Effects, 17 J.L. ECON. & ORG. 62 (2001).
(182.) Jack S. Levy, Applications of Prospect Theory to Political Science, 135 SYNTHESE 215, 218 (2003).
(183.) William Samuelson & Richard Zeckhauser, Status Quo Bias in Decision Making, 1 J. RISK & UNCERTAINTY 7 (1988); Daniel Kahneman, Jack L. Knetsch & Richard H. Thaler, The Endowment Effect, Loss Aversion, and Status Quo Bias, 5 J. ECON. PERSP. 193, 197–99 (1991).
(184.) Maurice Schweitzer, Disentangling Status Quo and Omission Effects: An Experimental Analysis, 58 ORG. BEHAV. & HUM. DECISION PROCESSES 457 (1994).
(185.) Ilana Ritov & Jonathan Baron, Status-Quo and Omission Biases, 5 J. RISK & UNCERTAINTY 49 (1992).
(186.) Samuelson & Zeckhauser, supra note 183, at 12–21. For empirical support of this phenomenon, see Alexander Kempf & Stefan Ruenzi, Status Quo Bias and the Number of Alternatives: An Empirical Illustration from the Mutual Fund Industry, 7 J. BEHAV. FIN. 204 (2006).
(187.) Brigitte Madrian & Dennis Shea, The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior, 116 Q.J. ECON. 1149 (2001). See also infra p. 180.
(188.) Eric J. Johnson & Daniel Goldstein, Do Defaults Save Lives?, 302 SCI. 1338 (2003).
(190.) See, e.g., Kahneman, Knetsch & Thaler, supra note 183, at 197–99; Jonathan Baron & Ilana Ritov, Reference Points and Omission Bias, 59 ORG. BEHAV. & HUM. DECISION PROCESSES 475, 479–80 (1994); Avital Moshinsky & Maya Bar-Hillel, Status Quo Label Bias, 28 SOC. COGNITION 191 (2010).
(192.) Craig R.M. McKenzie, Michael J. Liersch & Stacey R. Finkelstein, Recommendations Implicit in Policy Defaults, 17 PSYCHOL. SCI. 414 (2006).
(193.) Scott Eidelman & Christian S. Crandall, Bias in Favor of the Status Quo, 6 SOC. & PERSONALITY PSYCHOL. COMPASS 270, 272 (2012). See also infra p. 106.
(195.) Mark Spranca, Elisa Minsk & Jonathan Baron, Omission and Commission in Judgment and Choice, 27 J. EXPERIMENTAL SOC. PSYCHOL. 76 (1991); Johanna H. Kordes-de Vaal, Intention and the Omission Bias: Omissions Perceived as Nondecisions, 93 ACTA PSYCHOLOGICA 161 (1996); Peter DeScioli, John Christner & Robert Kurzban, The Omission Strategy, 22 PSYCHOL. SCI. 442 (2011).
(196.) Ilana Ritov & Jonathan Baron, Reluctance to Vaccinate: Omission Bias and Ambiguity, 3 J. BEHAV. DECISION MAKING 263 (1990).
(197.) Richard H. Thaler, Toward a Positive Theory of Consumer Choice, 1 J. ECON. BEHAV. & ORG. 39 (1980); Kahneman, Knetsch & Thaler, supra note 162; Russell Korobkin, Wrestling with the Endowment Effect, or How to Do Law and Economics without the Coase Theorem, in THE OXFORD HANDBOOK OF BEHAVIORAL ECONOMICS AND THE LAW, supra note 8, at 300.
(198.) Russell Korobkin, The Endowment Effect and Legal Analysis, 97 NW. U. L. REV. 1227, 1228 (2003).
(199.) See supra p. 9; infra pp. 232–34.
(200.) See, e.g., C.H. Coombs, T.G. Bezembinder & F.M. Goode, Testing Expectation Theories of Decision Making without Measuring Utility or Subjective Probability, 4 J. MATH. PSYCHOL. 72 (1967); JUDD HAMMACK & GARDNER M. BROWN JR., WATERFOWL AND WETLANDS: TOWARD BIO-ECONOMIC ANALYSIS 26–27 (1974).
(202.) See, e.g., Jack L. Knetsch & J.A. Sinden, Willingness to Pay and Compensation Demanded: Experimental Evidence of an Unexpected Disparity in Measures of Value, 99 Q.J. ECON. 507 (1984); Kahneman, Knetsch & Thaler, supra note 162.
(203.) See, e.g., Jack L. Knetsch, The Endowment Effect and Evidence of Nonreversible Indifference Curves, 79 AM. ECON. REV. 1277 (1989); Jack L. Knetsch, Preferences and Nonreversibility of Indifference Curves, 17 J. ECON. BEHAV. & ORG. 131 (1992).
(204.) Guido Ortona & Francesco Scacciati, New Experiments on the Endowment Effect, 13 J. ECON. PSYCHOL. 277 (1992); Vera Hoorens, Nicole Remmers & Kamieke van de Riet, Time Is an Amazingly Variable Amount of Money: Endowment and Ownership Effects in the Subjective Value of Working Time, 20 J. ECON. PSYCHOL. 383 (1999).
(205.) W. Kip Viscusi, Wesley A. Magat & Joel Huber, An Investigation of the Rationality of Consumer Valuations of Multiple Health Risks, 18 RAND J. ECON. 465, 469, 477–78 (1987).
(206.) Russell Korobkin, The Status Quo Bias and Contract Default Rules, 83 CORNELL L. REV. 608 (1998).
(207.) Maya Bar-Hillel & Efrat Neter, Why Are People Reluctant to Exchange Lottery Tickets?, 70 J. PERSONALITY & SOC. PSYCHOL. 17, 22–24 (1996). When items are identical and there is no incentive to trade, no trade is expected. See David Gal, A Psychological Law of Inertia and the Illusion of Loss Aversion, 1 JUDGMENT & DECISION MAKING 23, 26–27 (2006).
(208.) Gretchen B. Chapman, Similarity and Reluctance to Trade, 11 J. BEHAV. DECISION MAKING 47 (1998).
(209.) Eric van Dijk & Daan van Knippenberg, Trading Wine: On the Endowment Effect, Loss Aversion, and the Comparability of Consumer Goods, 19 J. ECON. PSYCHOL. 485 (1998); Jason F. Shogren et al., Resolving Differences in Willingness to Pay and Willingness to Accept, 84 AM. ECON. REV. 255 (1994).
(210.) Nathan Novemsky & Daniel Kahneman, The Boundaries of Loss Aversion, 42 J. MARKETING RES. 119 (2005).
(211.) Kahneman, Knetsch & Thaler, supra note 162; Amos Tversky & Daniel Kahneman, Loss Aversion in Riskless Choice: A Reference-Dependent Model, 106 Q.J. ECON. 1039, 1055 (1991); Novemsky & Kahneman, supra note 210, at 124–25.
(212.) Samuelson & Zeckhauser, supra note 183, at 12–22 (investment portfolios); van Dijk & van Knippenberg, supra note 209 (bargaining chips). Cf. Bar-Hillel & Neter, supra note 207 (lottery tickets).
(213.) George Loewenstein & Samuel Issacharoff, Source Dependence in the Valuation of Objects, 7 J. BEHAV. DECISION MAKING 157 (1994).
(214.) Therese Jefferson & Ross Taplin, An Investigation of the Endowment Effect Using a Factorial Design, 32 J. ECON. PSYCHOL. 899 (2011).
(215.) Christopher Buccafusco & Christopher Jon Sprigman, The Creativity Effect, 78 U. CHI. L. REV. 31 (2011). See also infra pp. 226–27.
(216.) John A. List, Does Market Experience Eliminate Market Anomalies?, 118 Q.J. ECON. 41 (2003); John A. List, Does Market Experience Eliminate Market Anomalies? The Case of Exogenous Market Experience, 101(3) AM. ECON. REV. PAPERS & PROC. 313 (2011).
(218.) But see Charles R. Plott & Kathryn Zeiler, The Willingness to Pay—Willingness to Accept Gap, the “Endowment Effect,” Subject Misconceptions, and Experimental Procedures for Eliciting Valuations, 95 AM. ECON. REV. 530 (2005) [hereinafter Plott & Zeiler, The WTP-WTA Gap]; Charles R. Plott & Kathryn Zeiler, Exchange Asymmetries Incorrectly Interpreted as Evidence of Endowment Effect Theory and Prospect Theory?, 97 AM. ECON. REV. 1449 (2007) [hereinafter Plott & Zeiler, Exchange Asymmetries]; Gregory Klass & Kathryn Zeiler, Against Endowment Theory: Experimental Economics and Legal Scholarship, 61 UCLA L. REV. 2 (2013). Note that a WTA-WTP disparity may exist even if, contrary to both expected utility theory and prospect theory, the WTA and WTP are hardly correlated within subjects. See Jonathan Chapman et al., Willingness-to-Pay and Willingness-to-Accept Are Probably Less Correlated than You Think (working paper, 2017, available at: https://ssrn.com/abstract=2988958).
(219.) For overviews, see Thomas C. Brown & Robin Gregory, Why the WTA-WTP Disparity Matters, 28 ECOLOGICAL ECON. 323, 326–29 (1999); Korobkin, supra note 197, at 304–18; Carey K. Morewedge & Colleen E. Giblin, Explanations of the Endowment Effect: An Integrative Review, 19 TRENDS COGNITIVE SCI. 339 (2015); Kathryn Zeiler, What Explains Observed Reluctance to Trade? A Comprehensive Literature Review, in RESEARCH HANDBOOK ON BEHAVIORAL LAW AND ECONOMICS (Kathryn Zeiler & Joshua Teitelbaum eds., 2018, available at: https://ssrn.com/abstract=2862021).
(220.) Thaler, supra note 197, at 44. See also Tversky & Kahneman, supra note 211; Michal A. Strahilevitz & George Loewenstein, The Effect of Ownership History on the Valuation of Objects, 25 J. CONSUMER RES. 276 (1998); Brown & Gregory, supra note 219, at 327.
(221.) See infra pp. 58–76.
(222.) James K. Beggan, On the Social Nature of Nonsocial Perception: The Mere Ownership Effect, 62 J. PERSONALITY & SOC. PSYCHOL. 229 (1992) (but see Michael J. Barone, Terence A. Shimp & David E. Sprott, Mere Ownership Revisited: A Robust Effect?, 6 J. CONSUMER PSYCHOL. 257 (1997)). On psychological ownership and the endowment effect, see infra pp. 203–04, 209–13. For a theory of the endowment effect as self-enhancement in response to a threat—combining elements of ownership and loss aversion—see Promothesh Chatterjee, Caglar Irmak & Randall L. Rose, The Endowment Effect as Self-Enhancement in Response to Threat, 40 J. CONSUMER RES. 460 (2013).
(223.) Carey K. Morewedge et al., Bad Riddance or Good Rubbish? Ownership and Not Loss Aversion Causes the Endowment Effect, 45 J. EXPERIMENTAL SOC. PSYCHOL. 947 (2009).
(224.) Nathaniel J.S. Ashby, Stephan Dickert & Andreas Glöckner, Focusing on What You Own: Biased Information Uptake due to Ownership, 7 JUDGMENT & DECISION MAKING 254 (2012); Morewedge & Giblin, supra note 219.
(225.) On cost of regret, see infra pp. 505–07.
(226.) Andrea Isoni, Graham Loomes & Robert Sugden, The Willingness to Pay—Willingness to Accept Gap, the “Endowment Effect,” Subject Misconceptions, and Experimental Procedures for Eliciting Valuations: Comment, 101 AM. ECON. REV. 991 (2011).
(227.) W. Michael Hanemann, Willingness to Pay and Willingness to Accept: How Much Can They Differ?, 81 AM. ECON. REV. 635 (1991).
(230.) Russell Korobkin, Policymaking and the Offer/Asking Price Gap: Toward a Theory of Efficient Entitlement Allocation, 46 STAN. L. REV. 663, 693–96 (1994).
(233.) Ray Weaver & Shane Frederick, A Reference Price Theory of the Endowment Effect, 49 J. MARKETING RES. 696 (2012); Itamar Simonson & Aimee Drolet, Anchoring Effects on Consumers’ Willingness to Pay and Willingness to Accept, 31 J. CONSUMER RES. 681 (2004).
(238.) The latter procedure was used, for example, by Knetsch and Wong: Jack L. Knetsch & Wei-Kang Wong, The Endowment Effect and the Reference State: Evidence and Manipulations, 71 J. ECON. BEHAV. & ORG. 407 (2009).
(241.) Knetsch & Wong, supra note 238; Weining Koh & Wei-Kang Wong, The Endowment Effect and the Willingness to Accept-Willingness to Pay Gap: Subject Misconceptions or Reference Dependence? (working paper, 2011), available at: http://courses.nus.edu.sg/course/ecswong/research/WTA-WTP.pdf.
(243.) Hal R. Arkes & Catherine Blumer, The Psychology of Sunk Cost, 35 ORG. BEHAV. & HUM. DECISION PROCESSES 124, 127–29 (1985).
(245.) Anne M. McCarthy, F. David Schoorman & Arnold C. Cooper, Reinvestment Decisions by Entrepreneurs: Rational Decision-Making or Escalation of Commitment?, 8 J. BUS. VENTURING 9 (1993).
(247.) See, e.g., McCarthy, Schoorman & Cooper, supra note 245; Barry M. Staw & Ha Hoang, Sunk Costs in the NBA: Why Draft Order Affects Playing Time and Survival in Professional Basketball, 40 ADMIN. SCI. Q. 474 (1995).
(248.) Barry M. Staw & Jerry Ross, Understanding Behavior in Escalation Situations, 246 SCI. 216 (1989); Gillian Ku, Learning to De-escalate: The Effects of Regret in Escalation of Commitment, 105 ORG. BEHAV. & HUM. DECISION PROCESSES 221, 222–23 (2008).
(249.) Barry M. Staw, Knee-Deep in the Big Muddy: A Study of Escalating Commitment to a Chosen Course of Action, 16 ORG. BEHAV. & HUM. PERFORMANCE 27 (1976); Joel Brockner, The Escalation of Commitment to a Failing Course of Action: Toward Theoretical Progress, 17 ACAD. MGMT. REV. 39 (1992).
(250.) See infra pp. 59–61.
(251.) Richard H. Thaler & Eric J. Johnson, Gambling with the House Money and Trying to Break Even: The Effects of Prior Outcomes on Risky Choice, 36 MGMT. SCI. 643 (1990).
(256.) Chip Heath, Escalation and De-escalation of Commitment in Response to Sunk Costs: The Role of Budgeting in Mental Accounting, 62 ORG. BEHAV. & HUM. DECISION PROCESSES 38 (1995).
(257.) LEON FESTINGER, A THEORY OF COGNITIVE DISSONANCE (1957).
(258.) Peter C. Wason, On the Failure to Eliminate Hypotheses in a Conceptual Task, 12 Q.J. EXPERIMENTAL PSYCHOL. 129 (1960) [hereinafter Wason, Failure]; Peter C. Wason, Reasoning about a Rule, 20 Q.J. EXPERIMENTAL PSYCHOL. 273 (1968).
(259.) For overviews of different parts of this body of research, see Ziva Kunda, The Case for Motivated Reasoning, 108 PSYCHOL. BULL. 480 (1990); Raymond S. Nickerson, Confirmation Bias: A Ubiquitous Phenomenon in Many Guises, 2 REV. GENERAL PSYCHOL. 175 (1998); Hahn & Harris, supra note 2; BARON, supra note 47, at 199–227; Symposium, Motivated Beliefs, 30 J. ECON. PERSP. 133–212 (2016).
(261.) Steven E. Clark & Gary L. Wells, On the Diagnosticity of Multiple-Witness Identifications, 32 LAW & HUM. BEHAV. 406 (2008).
(263.) Id.; Lisa L. Shu, Francesca Gino & Max H. Bazerman, Dishonest Deed, Clear Conscience: When Cheating Leads to Moral Disengagement and Motivated Forgetting, 37 PERSONALITY & SOC. PSYCHOL. BULL. 330 (2011).
(264.) On dual-system theories of judgment and decision-making, see supra pp. 21–23.
(265.) Lindsley G. Boiney, Jane Kennedy & Pete Nye, Instrumental Bias in Motivated Reasoning: More When More Is Needed, 72 ORG. BEHAV. & HUM. DECISION PROCESSES 1 (1997).
(268.) Anna Coenen, Bob Rehder & Todd M. Gureckis, Strategies to Intervene on Causal Systems Are Adaptively Selected, 79 COGNITIVE PSYCHOL. 102 (2015).
(269.) See generally Andrew J. Wistrich & Jeffrey J. Rachlinski, How Lawyers’ Intuitions Prolong Litigation, 86 S. CAL. L. REV. 571, 594–96 (2013).
(271.) Kari Edwards & Edward E. Smith, A Disconfirmation Bias in the Evaluation of Arguments, 71 J. PERSONALITY & SOC. PSYCHOL. 5 (1996).
(272.) See, e.g., Lee Ross, Mark R. Lepper & Michael Hubbard, Perseverance in Self-Perception and Social Perception: Biased Attributional Processes in the Debriefing Paradigm, 32 J. PERSONALITY & SOC. PSYCHOL. 880 (1975).
(273.) See infra pp. 82–83.
(274.) See, e.g., Charles G. Lord, Lee Ross & Mark R. Lepper, Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence, 37 J. PERSONALITY & SOC. PSYCHOL. 2098 (1979).
(276.) Dolores Albarracín & Amy L. Mitchell, The Role of Defensive Confidence in Preference for Proattitudinal Information: How Believing That One Is Strong Can Sometimes Be a Defensive Weakness, 30 PERSONALITY & SOC. PSYCHOL. BULL. 1565 (2004).
(277.) Keith E. Stanovich, Richard F. West & Maggie E. Toplak, Myside Bias, Rational Thinking, and Intelligence, 22 CURRENT DIRECTIONS PSYCHOL. SCI. 259 (2013).
(278.) Erika Price et al., Open-Minded Cognition, 41 PERSONALITY & SOC. PSYCHOL. BULL. 1488 (2015).
(279.) Jonathan Baron, Myside Bias in Thinking about Abortion, 1 THINKING & REASONING 221 (1995).
(283.) For a thoughtful taxonomy of phenomena related to overoptimism, see Paul D. Windschitl & Jillian O’Rourke Stuart, Optimism Biases: Types and Causes, in 2 WILEY BLACKWELL HANDBOOK, supra note 2, at 431, 432–36.
(284.) See infra pp. 64–66.
(286.) Francis W. Irwin, Stated Expectations as Functions of Probability and Desirability of Outcomes, 21 J. PERSONALITY 329 (1953).
(287.) Neil D. Weinstein, Unrealistic Optimism about Future Life Events, 39 J. PERSONALITY & SOC. PSYCHOL. 806 (1980); Zlatan Krizan, Jeffrey C. Miller & Omesh Johar, Wishful Thinking in the 2008 U.S. Presidential Election, 21 PSYCHOL. SCI. 140 (2010); Cade Massey, Joseph P. Simmons & David A. Armor, Hope over Experience: Desirability and the Persistence of Optimism, 22 PSYCHOL. SCI. 274 (2011).
(288.) Lynn A. Baker & Robert E. Emery, When Every Relationship Is above Average: Perceptions and Expectations of Divorce at the Time of Marriage, 17 LAW & HUM. BEHAV. 439, 443 (1993).
(289.) Ola Svenson, Are We All Less Risky and More Skillful than Our Fellow Drivers?, 47 ACTA PSYCHOLOGICA 143 (1981).
(290.) Justin Kruger & David Dunning, Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments, 77 J. PERSONALITY & SOC. PSYCHOL. 1121 (1999).
(291.) Mark D. Alicke, Global Self-Evaluation as Determined by the Desirability and Controllability of Trait Adjectives, 49 J. PERSONALITY & SOC. PSYCHOL. 1621 (1985).
(292.) David Dunning, Judith A. Meyerowitz & Amy D. Holzberg, Ambiguity and Self-Evaluation: The Role of Idiosyncratic Trait Definitions in Self-Serving Assessments of Ability, 57 J. PERSONALITY & SOC. PSYCHOL. 1082 (1989).
(293.) Mark D. Alicke et al., Personal Contact, Individuation, and the Above-Average Effect, 68 J. PERSONALITY & SOC. PSYCHOL. 804 (1995). On these and other variables, see also Peter R. Harris, Dale W. Griffin & Sandra Murray, Testing the Limits of Optimistic Bias: Event and Person Moderators in a Multilevel Framework, 95 J. PERSONALITY & SOC. PSYCHOL. 1225 (2008).
(294.) Tali Sharot, The Optimism Bias, 21 CURRENT BIOLOGY R941, R942–44 (2011).
(295.) Amy H. Mezulis et al., Is There a Universal Positivity Bias in Attributions? A Meta-analytic Review of Individual, Developmental, and Cultural Differences in the Self-Serving Attributional Bias, 130 PSYCHOL. BULL. 711 (2004). On attribution theories, see also infra pp. 68–69.
(296.) Matthew Fisher & Frank C. Keil, The Illusion of Argument Justification, 143 J. EXPERIMENTAL PSYCHOL.: GENERAL 425 (2014).
(297.) Justin Kruger & Jeremy Burrus, Egocentrism and Focalism in Unrealistic Optimism (and Pessimism), 40 J. EXPERIMENTAL SOC. PSYCHOL. 332 (2004); John R. Chambers, Paul D. Windschitl & Jerry Suls, Egocentrism, Event Frequency, and Comparative Optimism: When What Happens Frequently Is “More Likely to Happen to Me,” 29 PERSONALITY & SOC. PSYCHOL. BULL. 1343 (2003).
(298.) Maya Bar-Hillel, David V. Budescu & Moti Amar, Predicting World Cup Results: Do Goals Seem More Likely When They Pay off?, 15 PSYCHONOMIC BULL. & REV. 278 (2008).
(299.) George Loewenstein, Ted O’Donoghue & Matthew Rabin, Projection Bias in Predicting Future Utility, 118 Q.J. ECON. 1209 (2003).
(302.) Michael T. Moore & David M. Fresco, Depressive Realism: A Meta-analytic Review, 32 CLINICAL PSYCHOL. REV. 496 (2012).
(304.) Harold Sigall, Arie Kruglanski & Jack Fyock, Wishful Thinking and Procrastination, 15 J. SOC. BEHAV. & PERSONALITY 283 (2000). On procrastination, see also infra pp. 87–88.
(305.) See also infra pp. 298–99.
(306.) Colin Camerer & Dan Lovallo, Overconfidence and Excess Entry: An Experimental Approach, 89 AM. ECON. REV. 306 (1999). See also infra pp. 385–86.
(307.) Linda Babcock & George Loewenstein, Explaining Bargaining Impasse: The Role of Self-Serving Biases, 11 J. ECON. PERSP. 109 (1997). See also infra pp. 497–500.
(308.) Don A. Moore & Paul J. Healy, The Trouble with Overconfidence, 115 PSYCHOL. REV. 502 (2008); Don A. Moore, Elizabeth R. Tenney & Uriel Haran, Overprecision in Judgment, in WILEY BLACKWELL HANDBOOK, supra note 2, at 182, 183–84. See also supra pp. 61–64.
(309.) For overviews, see Sarah Lichtenstein, Baruch Fischhoff & Lawrence D. Phillips, Calibration of Probabilities: The State of the Art to 1980, in JUDGMENT UNDER UNCERTAINTY, supra note 77, at 306; Nickerson, supra note 259, at 188–89; Ulrich Hoffrage, Overconfidence, in COGNITIVE ILLUSIONS: A HANDBOOK ON FALLACIES AND BIASES IN THINKING, JUDGEMENT AND MEMORY 235 (Rüdiger Pohl ed., 2004).
(312.) Peter Juslin, The Overconfidence Phenomenon as a Consequence of Informal Experimenter-Guided Selection of Almanac Items, 57 ORG. BEHAV. & HUM. DECISION PROCESSES 226 (1994).
(313.) Gerd Gigerenzer, Ulrich Hoffrage & Heinz Kleinbölting, Probabilistic Mental Models: A Brunswikian Theory of Confidence, 98 PSYCHOL. REV. 506 (1991).
(314.) Arthur M. Glenberg, Alex Cherry Wilkinson & William Epstein, The Illusion of Knowing: Failure in the Self-Assessment of Comprehension, 10 MEMORY & COGNITION 597 (1982).
(315.) Hal R. Arkes, Robyn M. Dawes & Caryn Christensen, Factors Influencing the Use of a Decision Rule in a Probabilistic Task, 37 ORG. BEHAV. & HUM. DECISION PROCESSES 93 (1986).
(316.) Allan H. Murphy & Robert L. Winkler, Can Weather Forecasters Formulate Reliable Probability Forecasts of Precipitation and Temperature?, 2 NAT’L WEATHER DIG. 2 (1977).
(317.) Nickerson, supra note 259, at 189; Moore, Tenney & Haran, supra note 308, at 187–88, 189; Jane Goodman-Delahunty et al., Insightful or Wishful: Lawyers’ Ability to Predict Case Outcomes, 16 PSYCHOL. PUB. POL’Y & L. 133 (2010); Craig R.M. McKenzie, Michael J. Liersch & Ilan Yaniv, Overconfidence in Interval Estimates: What Does Expertise Buy You?, 107 ORG. BEHAV. & HUM. DECISION PROCESSES 179 (2008); Itzhak Ben-David, John R. Graham & Campbell R. Harvey, Managerial Miscalibration, 128 Q.J. ECON. 1547 (2013). See also Deborah J. Miller, Elliot S. Spengler & Paul M. Spengler, A Meta-analysis of Confidence and Judgment Accuracy in Clinical Decision Making, 62 J. COUNSELING PSYCHOL. 553 (2015) (a meta-analysis revealing a small but statistically significant correlation between counseling psychologists’ confidence and the accuracy of their judgments); infra pp. 513–14.
(318.) See, e.g., Eta S. Berner & Mark L. Graber, Overconfidence as a Cause of Diagnostic Error in Medicine, 121 AM. J. MED. S2 (2008).
(321.) See infra pp. 499–500, 513–14.
(322.) Albert H. Hastorf & Hadley Cantril, They Saw a Game: A Case Study, 49 J. ABNORMAL & SOC. PSYCHOL. 129 (1954).
(323.) An experiment in which subjects of opposing ideological inclinations were asked to describe a political demonstration yielded similar results. See Dan M. Kahan et al., “They Saw a Protest”: Cognitive Illiberalism and the Speech-Conduct Distinction, 64 STAN. L. REV. 851 (2012).
(324.) Lee Ross & Andrew Ward, Naive Realism in Everyday Life: Implications for Social Conflict and Misunderstanding, in VALUES AND KNOWLEDGE 103 (Edward S. Reed, Elliot Turiel & Terrance Brown eds., 1996).
(326.) Thomas Gilovich, Differential Construal and the False Consensus Effect, 59 J. PERSONALITY & SOC. PSYCHOL. 623 (1990).
(327.) Lee Ross, David Greene & Pamela House, The “False Consensus Effect”: An Egocentric Bias in Social Perception and Attribution Processes, 13 J. EXPERIMENTAL SOC. PSYCHOL. 279 (1977).
(329.) Emily Pronin, Daniel Y. Lin & Lee Ross, The Bias Blind Spot: Perceptions of Bias in Self versus Others, 28 PERSONALITY & SOC. PSYCHOL. BULL. 369 (2002).
(330.) Emily Pronin, Tom Gilovich & Lee Ross, Objectivity in the Eye of the Beholder: Divergent Perceptions of Bias in Self versus Others, 111 PSYCHOL. REV. 781 (2004).
(331.) In a seminal study, pro-Israeli and pro-Arab subjects watched the same media reports of the massacre conducted by Phalangist gunmen in the Sabra and Shatila refugee camps. Both groups overwhelmingly saw the media coverage as slanted in favor of the other side, and both recalled more negative references to their side. See Robert P. Vallone, Lee Ross & Mark R. Lepper, The Hostile Media Phenomenon: Biased Perception and Perceptions of Media Bias in Coverage of the Beirut Massacre, 49 J. PERSONALITY & SOC. PSYCHOL. 577 (1985).
(332.) Edward E. Jones & Victor A. Harris, The Attribution of Attitudes, 3 J. EXPERIMENTAL SOC. PSYCHOL. 1 (1967).
(333.) Lee Ross, The Intuitive Psychologist and His Shortcomings: Distortions in the Attribution Process, 10 ADVANCES EXPERIMENTAL SOC. PSYCHOL. 173, 184–87 (1977).
(334.) For an overview, see Daniel T. Gilbert & Patrick S. Malone, The Correspondence Bias, 117 PSYCHOL. BULL. 21 (1995). For a discussion of the fundamental attribution error within the broader context of cognitive social psychology, see LEE ROSS & RICHARD E. NISBETT, THE PERSON AND THE SITUATION: PERSPECTIVES OF SOCIAL PSYCHOLOGY (1991).
(335.) John Sabini, Michael Siepmann & Julia Stein, The Really Fundamental Attribution Error in Social Psychological Research, 12 PSYCHOL. INQUIRY 1 (2001).
(337.) Id. at 27–28. For famous demonstrations of people’s conformity, see Solomon E. Asch, Effects of Group Pressure upon the Modification and Distortion of Judgments, in GROUPS, LEADERSHIP AND MEN; RESEARCH IN HUMAN RELATIONS 177 (Harold Guetzkow ed., 1951); Stanley Milgram, Behavioral Study of Obedience, 67 J. ABNORMAL & SOC. PSYCHOL. 371 (1963). For an overview, see Donelson R. Forsyth, Social Influence and Group Behavior, in HANDBOOK OF PSYCHOLOGY, Vol. 5: PERSONALITY AND SOCIAL PSYCHOLOGY 305–328 (Irving B. Weiner, Howard A. Tennen & Jerry M. Suls eds., 2d ed. 2012).
(339.) Daniel T. Gilbert, Brett W. Pelham & Douglas S. Krull, On Cognitive Busyness: When Person Perceivers Meet Persons Perceived, 54 J. PERSONALITY & SOC. PSYCHOL. 733 (1988).
(340.) Joseph P. Forgas, On Being Happy and Mistaken: Mood Effects on the Fundamental Attribution Error, 75 J. PERSONALITY & SOC. PSYCHOL. 318 (1998).
(341.) Ara Norenzayan & Richard E. Nisbett, Culture and Causal Cognition, 9 CURRENT DIRECTIONS PSYCHOL. SCI. 132 (2000). But see Douglas S. Krull et al., The Fundamental Attribution Error: Correspondence Bias in Individualist and Collectivist Cultures, 25 PERSONALITY & SOC. PSYCHOL. BULL. 1208 (1999).
(342.) For overviews, see Roger Buehler, Dale Griffin & Michael Ross, Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions, in HEURISTICS AND BIASES, supra note 97, at 250; Roger Buehler, Dale Griffin & Johanna Peetz, The Planning Fallacy: Cognitive, Motivational, and Social Origins, 42 ADVANCES EXPERIMENTAL SOCIAL PSYCHOLOGY 1 (2010).
(343.) Daniel Kahneman & Amos Tversky, Intuitive Prediction: Biases and Corrective Procedures, in JUDGMENT UNDER UNCERTAINTY, supra note 77, at 414. See also David Lagnado & Steven Sloman, Inside and Outside Probability Judgement, in BLACKWELL HANDBOOK OF JUDGMENT AND DECISION MAKING 155 (Derek Koehler & Nigel Harvey eds., 2004) (reviewing this phenomenon in the context of probability judgment).
(344.) Roger Buehler, Dale Griffin & Michael Ross, Exploring the “Planning Fallacy”: Why People Underestimate Their Task Completion Times, 67 J. PERSONALITY & SOC. PSYCHOL. 366 (1994).
(345.) Roger Buehler, Dale Griffin & Heather MacDonald, The Role of Motivated Reasoning in Optimistic Time Predictions, 23 PERSONALITY & SOC. PSYCHOL. BULL. 238 (1997). See also Buehler, Griffin & Peetz, supra note 342, at 27–31.
(346.) Mario Weick & Ana Guinote, How Long Will It Take? Power Biases Time Predictions, 46 J. EXPERIMENTAL SOC. PSYCHOL. 595 (2010).
(347.) See supra pp. 30–31.
(348.) On the fundamental attribution error, see supra pp. 68–69.
(351.) Stephanie P. Pezzo, Mark V. Pezzo & Eric R. Stone, The Social Implications of Planning, 42 J. EXPERIMENTAL SOC. PSYCHOL. 221 (2006).
(354.) See supra pp. 68–69.
(355.) Ellen J. Langer, The Illusion of Control, 32 J. PERSONALITY & SOC. PSYCHOL. 311 (1975).
(356.) John H. Fleming & John M. Darley, Perceiving Choice and Constraint: The Effects of Contextual and Behavioral Cues on Attitude Attribution, 56 J. PERSONALITY & SOC. PSYCHOL. 27 (1989).
(357.) James N. Henslin, Craps and Magic, 73 AM. J. SOC. 316 (1967).
(358.) Paul K. Presson & Victor A. Benassi, Illusion of Control: A Meta-analytic Review, 11 J. SOC. BEHAV. & PERSONALITY 493 (1996).
(360.) Francesca Gino, Zachariah Sharek & Don A. Moore, Keeping the Illusion of Control under Control: Ceilings, Floors, and Imperfect Calibration, 114 ORG. BEHAV. & HUM. DECISION PROCESSES 104 (2011).
(361.) For an overview of behavioral ethics, see Yuval Feldman, Behavioral Ethics Meets Behavioral Law and Economics, in THE OXFORD HANDBOOK OF BEHAVIORAL ECONOMICS AND THE LAW, supra note 8, at 213. See also Jennifer J. Kish-Gephart, David A. Harrison & Linda Klebe Treviño, Bad Apples, Bad Cases, and Bad Evidence about Sources of Unethical Decisions at Work, 95 J. APPLIED PSYCHOL. 1 (2010); Max H. Bazerman & Francesca Gino, Behavioral Ethics: Toward a Deeper Understanding of Moral Judgment and Dishonesty, 8 ANN. REV. L. & SOC. SCI. 85 (2012). See also infra pp. 455–61.
(362.) See supra pp. 21–23.
(363.) See supra pp. 58–61.
(365.) Don A. Moore & George Loewenstein, Self-Interest, Automaticity, and the Psychology of Conflict of Interest, 17 SOC. JUST. RES. 189, 189 (2004). See also Nicholas Epley & Eugene M. Caruso, Egocentric Ethics, 17 SOC. JUST. RES. 171 (2004); Brent L. Hughes & Jamil Zaki, The Neuroscience of Motivated Cognition, 19 TRENDS COGNITIVE SCI. 62 (2015).
(366.) Don A. Moore, Lloyd Tanlu & Max H. Bazerman, Conflict of Interest and the Intrusion of Bias, 5 JUDGMENT & DECISION MAKING 37 (2010).
(367.) Shaul Shalvi, Ori Eldar & Yoella Bereby-Meyer, Honesty Requires Time (and Lack of Justifications), 23 PSYCHOL. SCI. 1264 (2012).
(370.) David G. Rand, Joshua D. Greene & Martin A. Nowak, Spontaneous Giving and Calculated Greed, 489 NATURE 427 (2012); Eliran Halali, Yoella Bereby-Meyer & Nachshon Meiran, Between Rationality and Reciprocity: The Social Bright Side of Self-Control Failure, 143 J. EXPERIMENTAL PSYCHOL.: GENERAL 745 (2014).
(371.) David M. Bersoff, Why Good People Sometimes Do Bad Things: Motivated Reasoning and Unethical Behavior, 25 PERSONALITY & SOC. PSYCHOL. BULL. 28 (1999); Max H. Bazerman, George Loewenstein & Don A. Moore, Why Good Accountants Do Bad Audits, 80 HARV. BUS. REV. 96 (2002); Nina Mazar, On Amir & Dan Ariely, The Dishonesty of Honest People: A Theory of Self-Concept Maintenance, 45 J. MARKETING RES. 633 (2008).
(372.) C. Daniel Batson et al., Moral Hypocrisy: Appearing Moral to Oneself without Being So, 77 J. PERSONALITY & SOC. PSYCHOL. 525 (1999).
(376.) Other experiments have similarly demonstrated that increasing the saliency of dishonesty reduces cheating. See Francesca Gino, Shahar Ayal & Dan Ariely, Contagion and Differentiation in Unethical Behavior: The Effect of One Bad Apple on the Barrel, 20 PSYCHOL. SCI. 393 (2009).
(377.) Jason Dana, Roberto A. Weber & Jason Xi Kuang, Exploiting Moral Wiggle Room: Experiments Demonstrating an Illusory Preference for Fairness, 33 ECON. THEORY 67 (2007).
(378.) Mazar, Amir & Ariely, supra note 371; Shaul Shalvi et al., Justified Ethicality: Observing Desired Counterfactuals Modifies Ethical Perceptions and Behavior, 115 ORG. BEHAV. & HUM. DECISION PROCESSES 181 (2011).
(379.) Ann E. Tenbrunsel & David M. Messick, Ethical Fading: The Role of Self-Deception in Unethical Behavior, 17 SOC. JUST. RES. 223 (2004).
(380.) When ethical degradation occurs gradually rather than abruptly, it is also less likely to be noticed by others, including those whose role is to monitor the behavior of the actors. See Francesca Gino & Max H. Bazerman, When Misconduct Goes Unnoticed: The Acceptability of Gradual Erosion in Others’ Unethical Behavior, 45 J. EXPERIMENTAL SOC. PSYCHOL. 708 (2009).
(381.) Albert Bandura, Moral Disengagement in the Perpetration of Inhumanities, 3 PERSONALITY & SOC. PSYCHOL. REV. 193 (1999).
(382.) Id.; Celia Moore et al., Why Employees Do Bad Things: Moral Disengagement and Unethical Organizational Behavior, 65 PERSONNEL PSYCHOL. 1 (2012). See also Shahar Ayal & Francesca Gino, Honest Rationales for Dishonest Behavior, in THE SOCIAL PSYCHOLOGY OF MORALITY: EXPLORING THE CAUSES OF GOOD AND EVIL 149 (Mario Mikulincer & Phillip R. Shaver eds., 2012).
(383.) Francesca Gino, Shahar Ayal & Dan Ariely, Self-Serving Altruism? The Lure of Unethical Actions That Benefit Others, 93 J. ECON. BEHAV. & ORG. 285 (2013).
(384.) See infra pp. 102–04.
(385.) Leigh Thompson & George Loewenstein, Egocentric Interpretations of Fairness and Interpersonal Conflict, 51 ORG. BEHAV. & HUM. DECISION PROCESSES 176 (1992).
(394.) Francesca Gino & Adam D. Galinsky, Vicarious Dishonesty: When Psychological Closeness Creates Distance from One’s Moral Compass, 119 ORG. BEHAV. & HUM. DECISION PROCESSES 15 (2012).
(395.) Ori Weisel & Shaul Shalvi, The Collaborative Roots of Corruption, 112 PROC. NAT’L ACAD. SCI. USA 10651 (2015).
(397.) See, e.g., Robert Shapley & R. Clay Reid, Contrast and Assimilation in the Perception of Brightness, 82 PROC. NAT’L ACAD. SCI. USA 5983 (1985); Edward H. Adelson, Perceptual Organization and the Judgment of Brightness, 262 SCI. 2042 (1993).
(398.) See supra pp. 42–57.
(400.) See infra p. 104.
(401.) See infra pp. 94–101, 187–88, 194–95.
(402.) Norbert Schwarz & Herbert Bless, Scandals and the Public’s Trust in Politicians: Assimilation and Contrast Effects, 18 PERSONALITY & SOC. PSYCHOL. BULL. 574 (1992).
(403.) Stan Morse & Kenneth J. Gergen, Social Comparison, Self-Consistency, and the Concept of Self, 16 J. PERSONALITY & SOC. PSYCHOL. 148 (1970).
(405.) Paul M. Herr, Consequences of Priming: Judgment and Behavior, 51 J. PERSONALITY & SOC. PSYCHOL. 1106 (1986).
(407.) Marilynn B. Brewer & Joseph G. Weber, Self-Evaluation Effects of Interpersonal versus Intergroup Social Comparison, 66 J. PERSONALITY & SOC. PSYCHOL. 268 (1994).
(408.) Mussweiler, supra note 396. See also Norbert Schwarz & Herbert Bless, Assimilation and Contrast Effects in Attitude Measurement: An Inclusion/Exclusion Model, 19 ADVANCES CONSUMER RES. 72 (1992).
(409.) Jonathon D. Brown et al., When Gulliver Travels: Social Context, Psychological Closeness, and Self-Appraisals, 62 J. PERSONALITY & SOC. PSYCHOL. 717, 722–25 (1992).
(410.) Daniel Kahneman & Dale T. Miller, Norm Theory: Comparing Reality to Its Alternatives, 93 PSYCHOL. REV. 136 (1986).
(411.) See, e.g., Herr, supra note 405; E. Tory Higgins & C. Miguel Brendl, Accessibility and Applicability: Some Activation Rules Influencing Judgment, 31 J. EXPERIMENTAL SOC. PSYCHOL. 218 (1995); Nira Liberman, Jens Förster & E. Tory Higgins, Completed vs. Interrupted Priming: Reduced Accessibility from Post-Fulfillment Inhibition, 43 J. EXPERIMENTAL SOC. PSYCHOL. 258 (2007).
(412.) See generally Jens Förster & Nira Liberman, Knowledge Activation, in SOCIAL PSYCHOLOGY: HANDBOOK OF BASIC PRINCIPLES 201 (Arie W. Kruglansky & E. Tory Higgins eds., 2d ed. 2007).
(413.) David E. Meyer & Roger W. Schvaneveldt, Facilitation in Recognizing Pairs of Words: Evidence of a Dependence between Retrieval Operations, 90 J. EXPERIMENTAL PSYCHOL. 227 (1971).
(414.) E. Tory Higgins, William S. Rholes & Carl R. Jones, Category Accessibility and Impression Formation, 13 J. EXPERIMENTAL SOC. PSYCHOL. 141 (1977).
(418.) Daniel M. Oppenheimer, Robyn A. LeBoeuf & Noel T. Brewer, Anchors Aweigh: A Demonstration of Cross-Modality Anchoring and Magnitude Priming, 106 COGNITION 13, 16–17 (2008).
(419.) Fritz Strack & Thomas Mussweiler, Explaining the Enigmatic Anchoring Effect: Mechanisms of Selective Accessibility, 73 J. PERSONALITY & SOC. PSYCHOL. 437, 439–40 (1997).
(420.) Timothy D. Wilson et al., A New Look at Anchoring Effects: Basic Anchoring and Its Antecedents, 125 J. EXPERIMENTAL PSYCHOL. 387, 390–92 (1996).
(422.) KAHNEMAN, supra note 14, at 125.
(423.) Dan Ariely, George Loewenstein & Drazen Prelec, “Coherent Arbitrariness”: Stable Demand Curves without Stable Preferences, 118 Q.J. ECON. 73 (2003).
(424.) Robyn A. LeBoeuf & Eldar Shafir, The Long and Short of It: Physical Anchoring Effects, 19 J. BEHAV. DECISION MAKING 393 (2006).
(426.) See Birte Englich, Thomas Mussweiler & Fritz Strack, Playing Dice with Criminal Sentences: The Influence of Irrelevant Anchors on Experts’ Judicial Decision Making, 32 PERSONALITY SOC. PSYCHOL. BULL. 188, 194–95 (2006) (die toss); Ariely, Loewenstein & Prelec, supra note 423, at 75–77 (last two digits of participants’ social security number).
(428.) For a more detailed review of the theory and the studies supporting it, see KAHNEMAN, supra note 14, at 120–22. For a critical review of the theory, see Gretchen B. Chapman & Eric J. Johnson, Incorporating the Irrelevant, in HEURISTICS AND BIASES, supra note 14, at 120, 127–30.
(430.) See Shane W. Frederick & Daniel Mochon, A Scale Distortion Theory of Anchoring, 141 J. EXPERIMENTAL PSYCHOL. 124 (2012).
(431.) See Timothy D. Wilson et al., A New Look at Anchoring Effects: Basic Anchoring and Its Antecedents, 125 J. EXPERIMENTAL PSYCHOL. 387, 395–97 (1996).
(433.) Gregory B. Northcraft & Margaret A. Neale, Experts, Amateurs, and Real Estate: An Anchoring-and-Adjustment Perspective on Property Pricing Decisions, 39 ORG. BEHAV. & HUM. DECISION PROCESSES 84, 87–94 (1987).
(435.) Thomas Mussweiler, Fritz Strack & Tim Pfeiffer, Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility, 26 PERSONALITY & SOC. PSYCHOL. BULL. 1142 (2000). On this debiasing technique, see infra pp. 135–36.
(436.) Solomon E. Asch, Forming Impressions of Personality, 43 J. ABNORMAL & SOC. PSYCHOL. 258, 270–72 (1946).
(437.) For a collection of studies on this subject, see CONTEXT EFFECTS IN SOCIAL AND PSYCHOLOGICAL RESEARCH 5–218 (Norbert Schwarz & Seymour Sudman eds., 1992).
(438.) Eric R. Igou & Herbert Bless, Conversational Expectations as a Basis for Order Effects in Persuasion, 26 J. LANGUAGE & SOC. PSYCHOL. 260 (2007).
(439.) Adrian Furnham, The Robustness of the Recency Effect: Studies Using Legal Evidence, 113 J. GENERAL PSYCHOL. 351 (1986); José H. Kerstholt & Janet L. Jackson, Judicial Decision Making: Order of Evidence Presentation and Availability of Background Information, 12 APPLIED COGNITIVE PSYCHOL. 445 (1998). See also infra pp. 532–33.
(440.) Alison Hubbard Ashton & Robert H. Ashton, Sequential Belief Revision in Auditing, 63 ACCOUNTING REV. 623 (1988); Richard M. Tubbs, William F. Messier, Jr. & W. Robert Knechel, Recency Effects in the Auditor’s Belief-Revision Process, 65 ACCOUNTING REV. 452 (1990).
(441.) Eric Schwitzgebel & Fiery Cushman, Expertise in Moral Reasoning? Order Effects on Moral Judgment in Professional Philosophers and Non‐philosophers, 27 MIND & LANGUAGE 135 (2012). Order effects are also manifested when people memorize a list of items: they tend to remember items at the beginning and at the end of the list better than those in the middle.
(442.) See supra pp. 58–61.
(444.) Robin M. Hogarth & Hillel J. Einhorn, Order Effects in Belief Updating: The Belief-Adjustment Model, 24 COGNITIVE PSYCHOL. 1 (1992). On subsequent studies, see, e.g., Jane Kennedy, Debiasing Audit Judgment with Accountability: A Framework and Experimental Results, 31 J. ACCOUNTING RES. 231, 235–36 (1993).
(448.) Andrew D. Cuccia & Gary A. McGill, The Role of Decision Strategies in Understanding Professionals’ Susceptibility to Judgment Biases, 38 J. ACCOUNTING RES. 419 (2000).
(449.) Philip M.J. Reckers & Joseph J. Schultz, Jr., The Effects of Fraud Signals, Evidence Order, and Group-Assisted Counsel on Independent Auditor Judgment, 5 BEHAV. RES. ACCOUNTING 124 (1993).
(450.) Itamar Simonson & Amos Tversky, Choice in Context: Tradeoff Contrast and Extremeness Aversion, 29 J. MARKETING RES. 281 (1992).
(451.) Kaisa Herne, Decoy Alternatives in Policy Choices: Asymmetric Domination and Compromise Effects, 13 EUR. J. POL. ECON. 575 (1997).
(452.) Mark Kelman, Yuval Rottenstreich & Amos Tversky, Context-Dependence in Legal Decision Making, 25 J. LEGAL STUD. 287 (1996); infra pp. 532–34.
(453.) Joel Huber, John W. Payne & Christopher Puto, Adding Asymmetrically Dominated Alternatives: Violations of Regularity and the Similarity Hypothesis, 9 J. CONSUMER RES. 90 (1982).
(454.) Simonson & Tversky, supra note 450, at 287. But see Shane Frederick, Leonard Lee & Ernest Baskin, The Limits of Attraction, 51 J. MARKETING RES. 487, 498 (2014) (failing to replicate the pen experiment). Related phenomena refer to the effect of eliminating an option from a choice-set (in marketing and other spheres). See William Hedgcock, Akshay R. Rao & Haipeng Chen, Could Ralph Nader’s Entrance and Exit Have Helped Al Gore? The Impact of Decoy Dynamics on Consumer Choice, 46 J. MARKETING RES. 330 (2009).
(455.) Itamar Simonson, Choice Based on Reasons: The Case of Attraction and Compromise Effects, 16 J. CONSUMER RES. 158 (1989). See also Nathan Novemsky et al., Preference Fluency in Choice, 44 J. MARKETING RES. 347 (2007) (finding an increasing tendency to opt for the compromise option when the choice task is experienced as more difficult).
(458.) Ravi Dhar, Stephen M. Nowlis & Steven J. Sherman, Trying Hard or Hardly Trying: An Analysis of Context Effects in Choice, 9 J. CONSUMER PSYCHOL. 189 (2000). The evidence regarding the effect of time constraints on the attraction effect is mixed. See Lisa D. Ordóñez, Lehman Benson III & Andrea Pittarello, Time‐Pressure Perception and Decision Making, in 2 WILEY BLACKWELL HANDBOOK, supra note 2, at 519, 531.
(459.) Birger Wernerfelt, A Rational Reconstruction of the Compromise Effect: Using Market Data to Infer Utilities, 21 J. CONSUMER RES. 627 (1995); Emir Kamenica, Contextual Inference in Markets: On the Informational Content of Product Lines, 98 AM. ECON. REV. 2127 (2008).
(460.) Kamenica, supra note 459; Kathryn M. Sharpe, Richard Staelin & Joel Huber, Using Extremeness Aversion to Fight Obesity: Policy Implications of Context Dependent Demand, 35 J. CONSUMER RES. 406 (2008). See also infra pp. 294–95.
(462.) See supra pp. 42–44 and 34, respectively.
(463.) Stephen M. Nowlis & Itamar Simonson, The Effect of New Product Features on Brand Choice, 33 J. MARKETING RES. 36 (1996).
(464.) Thaler, supra note 197, at 50–51; Kahneman & Tversky, supra note 161, at 347. Similar results were obtained in survey experiments pertaining to saving time (rather than money), in riskless—but not in risky—decision problems. See France Leclerc, Bernd H. Schmitt & Laurette Dubé, Waiting Time and Decision Making: Is Time Like Money?, 22 J. CONSUMER RES. 110 (1995).
(465.) Joseph C. Nunes & C. Whan Park, Incommensurate Resources: Not Just More of the Same, 40 J. MARKETING RES. 26 (2003); Peter Jarnebrant, Olivier Toubia & Eric Johnson, The Silver Lining Effect: Formal Analysis and Experiments, 55 MGMT. SCI. 1832 (2009).
(466.) Ran Kivetz, Oleg Urminsky & Yuhuang Zheng, The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention, 43 J. MARKETING RES. 39 (2006).
(467.) Charles M. Brooks, Patrick J. Kaufmann & Donald R. Lichtenstein, Travel Configuration on Consumer Trip‐Chained Store Choice, 31 J. CONSUMER RES. 241 (2004); Charles M. Brooks, Patrick J. Kaufmann & Donald R. Lichtenstein, Trip Chaining Behavior in Multi-destination Shopping Trips: A Field Experiment and Laboratory Replication, 84 J. RETAILING 29 (2008). In the same vein, diminishing sensitivity with regard to temporal distances is associated with myopia. See infra pp. 88–93.
(468.) See, e.g., Paul Slovic, “If I Look at the Mass I Will Never Act”: Psychic Numbing and Genocide, 2 JUDGMENT & DECISION MAKING 79 (2007).
(469.) Christine Jolls, Cass R. Sunstein & Richard Thaler, A Behavioral Approach to Law and Economics, 50 STAN. L. REV. 1471, 1476–79 (1998).
(470.) Piers Steel, The Nature of Procrastination: A Meta-analytic and Theoretical Review of Quintessential Self-Regulatory Failure, 133 PSYCHOL. BULL. 65, 66 (2007).
(471.) Amos Tversky & Eldar Shafir, Choice under Conflict: The Dynamics of Deferred Decision, 3 PSYCHOL. SCI. 358, 361 (1992).
(473.) According to a prevailing notion in personality psychology, there are five basic dimensions of personality: extraversion, agreeableness, openness, conscientiousness, and neuroticism. See generally John M. Digman, Personality Structure: Emergence of the Five-Factor Model, 41 ANN. REV. PSYCHOL. 417 (1990); Lewis R. Goldberg, An Alternative “Description of Personality”: The Big-Five Factor Structure, 59 J. PERSONALITY & SOC. PSYCHOL. 1216 (1990).
(477.) See infra pp. 88–93.
(480.) Eyal Zamir, Daphna Lewinsohn-Zamir & Ilana Ritov, It’s Now or Never! Using Deadlines as Nudges, 42 LAW & SOC. INQUIRY 769 (2017).
(481.) M. Susan Roberts & George B. Semb, Analysis of the Number of Student-Set Deadlines in a Personalized Psychology Course, 17 TEACHING PSYCHOL. 170 (1990).
(482.) Dan Ariely & Klaus Wertenbroch, Procrastination, Deadlines, and Performance: Self-Control by Precommitment, 13 PSYCHOL. SCI. 219 (2002).
(483.) Gabriel D. Carroll et al., Optimal Defaults and Active Decisions, 124 Q.J. ECON. 1639 (2009). See also Punam Anand Keller et al., Enhanced Active Choice: A New Method to Motivate Behavior Change, 21 J. CONSUMER PSYCHOL. 376 (2011) (describing randomized laboratory and field studies showing that mandated active choice increases the willingness to vaccinate).
(484.) See generally Shane Frederick, George Loewenstein & Ted O’Donoghue, Time Discounting and Time Preference: A Critical Review, 40 J. ECON. LITERATURE 351 (2002); Oleg Urminsky & Gal Zauberman, The Psychology of Intertemporal Preferences, in WILEY BLACKWELL HANDBOOK, supra note 2, at 141. See also TIME AND DECISION: ECONOMIC AND PSYCHOLOGICAL PERSPECTIVES ON INTERTEMPORAL CHOICE (George Loewenstein, Daniel Read & Roy Baumeister eds., 2003).
(486.) See, e.g., MICHAEL R. GOTTFREDSON & TRAVIS HIRSCHI, A GENERAL THEORY OF CRIME (1990); Travis C. Pratt & Francis T. Cullen, The Empirical Status of Gottfredson and Hirschi’s General Theory of Crime: A Meta-analysis, 38 CRIMINOLOGY 931 (2000).
(487.) Paul A. Samuelson, A Note on Measurement of Utility, 4 REV. ECON. STUD. 155 (1937).
(488.) R.H. Strotz, Myopia and Inconsistency in Dynamic Utility Maximization, 23 REV. ECON. STUD. 165 (1955–1956); George Ainslie, Specious Reward: A Behavioral Theory of Impulsiveness and Impulse Control, 82 PSYCHOL. BULL. 463 (1975); David Laibson, Golden Eggs and Hyperbolic Discounting, 112 Q.J. ECON. 443 (1997).
(489.) Uri Benzion, Amnon Rapoport & Joseph Yagil, Discount Rates Inferred from Decisions: An Experimental Study, 35 MGMT. SCI. 270 (1989); Frederick, Loewenstein & O’Donoghue, supra note 484, at 390–93; Urminsky & Zauberman, supra note 484, at 147–52.
(490.) See, e.g., Richard Thaler, Some Empirical Evidence on Dynamic Inconsistency, 8 ECON. LETTERS 201 (1981).
(492.) George F. Loewenstein, Frames of Mind in Intertemporal Choice, 34 MGMT. SCI. 200 (1988).
(493.) See, e.g., George Loewenstein & Nachum Sicherman, Do Workers Prefer Increasing Wage Profiles?, 9 J. LABOR ECON. 67 (1991).
(494.) Gretchen B. Chapman, Preferences for Improving and Declining Sequences of Health Outcomes, 13 J. BEHAV. DECISION MAKING 203 (2000).
(495.) George Loewenstein & Drazen Prelec, Anomalies in Intertemporal Choice: Evidence and an Interpretation, 107 Q.J. ECON. 573 (1992). On prospect theory, see supra pp. 42–57.
(496.) Margaret C. Campbell & Caleb Warren, The Progress Bias in Goal Pursuit: When One Step Forward Seems Larger than One Step Back, 41 J. CONSUMER RES. 1316 (2015).
(497.) For an overview, see Walter Mischel, Yuichi Shoda & Monica L. Rodriguez, Delay of Gratification in Children, 244 SCI. 933 (1989).
(498.) Yuichi Shoda, Walter Mischel & Philip K. Peake, Predicting Adolescent Cognitive and Self-Regulatory Competencies from Preschool Delay of Gratification: Identifying Diagnostic Conditions, 26 DEVELOPMENTAL PSYCHOL. 978 (1990).
(499.) Ozlem Ayduk et al., Regulating the Interpersonal Self: Strategic Self-Regulation for Coping with Rejection Sensitivity, 79 J. PERSONALITY & SOC. PSYCHOL. 776 (2000).
(500.) Stian Reimers et al., Associations between a One‐Shot Delay Discounting Measure and Age, Income, Education and Real‐World Impulsive Behavior, 47 PERSONALITY & INDIVIDUAL DIFFERENCES 973 (2009).
(501.) George Loewenstein, Out of Control: Visceral Influences on Behavior, 65 ORG. BEHAV. & HUM. DECISION PROCESSES 272 (1996). Cf. Richard H. Thaler & H.M. Shefrin, An Economic Theory of Self-Control, 89 J. POL. ECON. 392 (1981).
(502.) Baba Shiv & Alexander Fedorikhin, Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making, 26 J. CONSUMER RES. 278 (1999).
(503.) See, e.g., Yaacov Trope & Nira Liberman, Construal-Level Theory of Psychological Distance, 117 PSYCHOL. REV. 440 (2010); Kentaro Fujita, Yaacov Trope & Nira Liberman, On the Psychology of Near and Far, in WILEY BLACKWELL HANDBOOK, supra note 2, at 404.
(504.) Selin A. Malkoc & Gal Zauberman, Deferring versus Expediting Consumption: The Effect of Outcome Concreteness on Sensitivity to Time Horizon, 43 J. MARKETING RES. 618 (2006).
(505.) Ariel Rubinstein, “Economics and Psychology”? The Case of Hyperbolic Discounting, 44 INT’L ECON. REV. 1207 (2003); Keith M. Marzilli Ericson et al., Money Earlier or Later? Simple Heuristics Explain Intertemporal Choices Better than Delay Discounting Does, 26 PSYCHOL. SCI. 826 (2015).
(506.) Daniel M. Bartels & Oleg Urminsky, On Intertemporal Selfishness: The Perceived Instability of Identity Underlies Impatient Consumption, 38 J. CONSUMER RES. 182 (2011).
(511.) Robert L. Scharff, Obesity and Hyperbolic Discounting: Evidence and Implications, 32 J. CONSUMER POL’Y 3 (2009); Shinsuke Ikeda, Myong-Il Kang & Fumio Ohtake, Hyperbolic Discounting, the Sign Effect, and the Body Mass Index, 29 J. HEALTH ECON. 268 (2010).
(512.) Warren K. Bickel, Amy L. Odum & Gregory J. Madden, Impulsivity and Cigarette Smoking: Delay Discounting in Current, Never, and Ex-smokers, 146 PSYCHOPHARMACOLOGY 447 (1999).
(513.) Warren K. Bickel & Lisa A. Marsch, Toward a Behavioral Economic Understanding of Drug Dependence: Delay Discounting Processes, 96 ADDICTION 73 (2001).
(514.) See infra pp. 180, 184, 379–80.
(515.) Stefano DellaVigna & Ulrike Malmendier, Contract Design and Self-Control: Theory and Evidence, 119 Q.J. ECON. 353 (2004); Klaus Wertenbroch, Self-Rationing: Self-Control in Consumer Choice, in TIME AND DECISION, supra note 484, at 491. On firms’ exploitation of consumer biases, see generally infra pp. 281–324.
(516.) See, e.g., IAN AYRES, CARROTS AND STICKS: UNLOCK THE POWER OF INCENTIVES TO GET THINGS DONE (2010).
(517.) Brigitte Madrian & Dennis Shea, The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior, 116 Q.J. ECON. 1149 (2001).
(518.) Richard H. Thaler & Shlomo Benartzi, Save More Tomorrow™: Using Behavioral Economics to Increase Employee Saving, 112 J. POL. ECON. S164 (2004). See also infra p. 180.
(519.) See infra pp. 162–85.
(521.) Another important aspect of human ethicality—namely, the mechanisms allowing ordinary people to violate ethical norms—is discussed under the heading of behavioral ethics (supra pp. 72–76; infra pp. 455–61), and additional issues will be discussed throughout the book apropos of specific legal issues.
(522.) See generally supra pp. 13–14.
(523.) JOHN RAWLS, A THEORY OF JUSTICE 26 (rev. ed. 1999).
(524.) See generally SHELLY KAGAN, NORMATIVE ETHICS 161–70 (1998).
(525.) Compared to other consequentialist theories, the over-demandingness objection is less applicable to economic analysis, because it assumes that under a relatively broad range of circumstances, the best way to maximize overall welfare is by each person rationally pursuing his or her own interests. Economic analysis rarely, if ever, suggests that people should strive to maximize overall utility.
(526.) See KAGAN, supra note 524, at 84–94, 106–52; Christopher McMahon, The Paradox of Deontology, 20 PHIL. & PUB. AFF. 350, 354–68 (1991); Stephen Darwall, Introduction, in DEONTOLOGY 1 (Stephen Darwall ed., 2003).
(527.) THOMAS NAGEL, THE VIEW FROM NOWHERE 176 (1986). On deontological notions of fairness, see also FRANCES M. KAMM, MORALITY, MORTALITY, Vol. I: DEATH AND WHOM TO SAVE FROM IT 76 (1993); Iwao Hirose, Aggregation and Numbers, 16 UTILITAS 62 (2004).
(528.) David Enoch, Intending, Foreseeing, and the State, 13 LEGAL THEORY 69, 97–99 (2007).
(529.) A useful collection of studies of the doing/allowing distinction is KILLING AND LETTING DIE (Bonnie Steinbock & Alastair Norcross eds., 2d ed. 1994).
(531.) See, e.g., PHILIPPA FOOT, The Problem of Abortion and the Doctrine of the Double Effect, in VIRTUES AND VICES AND OTHER ESSAYS IN MORAL PHILOSOPHY 19, 23 (1978); Judith Jarvis Thomson, The Trolley Problem, 94 YALE L.J. 1395 (1985); Alison McIntyre, Doing Away with Double Effect, 111 ETHICS 219 (2001).
(532.) See generally EYAL ZAMIR & BARAK MEDINA, LAW, ECONOMICS, AND MORALITY 41–56, 79–104 (2010).
(533.) See, e.g., Judith Jarvis Thomson, Some Ruminations on Rights, in RIGHTS, RESTITUTION, AND RISK 49 (William Parent ed., 1986); Samantha Brennan, Thresholds for Rights, 33 SOUTHERN J. PHIL. 143 (1995); KAGAN, supra note 524, at 78–80.
(535.) See, e.g., Charles Fried, The Value of Life, 82 HARV. L. REV. 1415 (1969); Mark Kelman, Saving Lives, Saving from Death, Saving from Dying: Reflection on “Over-Valuing” Identifiable Victims, 11 YALE J. HEALTH POL’Y. L. & ETHICS 51 (2011).
(536.) See, e.g., Samuel Scheffler, Introduction, in CONSEQUENTIALISM AND ITS CRITICS 1, 9 (Samuel Scheffler ed. 1988); Samantha Brennan, Thresholds for Rights, 33 SOUTHERN J. PHIL. 143, 145 (1995); KAGAN, supra note 530, at 1–5.
(537.) On this and other attempts “to consequentialize” deontology, see, e.g., ZAMIR & MEDINA, supra note 532, at 21–40; DOUGLAS W. PORTMORE, COMMONSENSE CONSEQUENTIALISM: WHEREIN MORALITY MEETS RATIONALITY (2011); Tom Dougherty, Agent-Neutral Deontology, 163 PHIL. STUD. 527 (2013).
(538.) See, e.g., Jonathan Baron & Mark Spranca, Protected Values, 70 ORG. BEHAV. & HUM. DECISION PROCESSES 1 (1997); Ilana Ritov & Jonathan Baron, Protected Values and Omission Bias, 79 ORG. BEHAV. & HUM. DECISION PROCESSES 79 (1999). For an overview, see Daniel M. Bartels et al., Moral Judgment and Decision Making, in WILEY BLACKWELL HANDBOOK, supra note 2, at 478, 483–87.
(539.) Jonathan Baron & Sara Leshner, How Serious Are Expressions of Protected Values?, 6 J. EXPERIMENTAL PSYCHOL.: APPLIED 183 (2000).
(540.) See Daniel M. Bartels & Douglas L. Medin, Are Morally Motivated Decision Makers Insensitive to the Consequences of Their Choices?, 18 PSYCHOL. SCI. 24, 24 (2007).
(542.) Michael R. Waldmann, Jonas Nagel & Alex Wiegmann, Moral Judgments, in THE OXFORD HANDBOOK OF THINKING AND REASONING, supra note 21, at 364, 383. See also GUIDO CALABRESI & PHILIP BOBBITT, TRAGIC CHOICES (1978); GUIDO CALABRESI, THE FUTURE OF LAW AND ECONOMICS (2016).
(543.) For an experimental demonstration of this point, see Philip E. Tetlock, Coping with Trade-Offs: Psychological Constraints and Political Implications, in ELEMENTS OF REASON: COGNITION, CHOICE, AND THE BOUNDS OF RATIONALITY 239, 254–55 (Arthur Lupia et al. eds., 2000).
(544.) See, e.g., Ilana Ritov & Jonathan Baron, Reluctance to Vaccinate: Omission Bias and Ambiguity, 3 J. BEHAV. DECISION MAKING 263 (1990); Mark Spranca, Elisa Minsk & Jonathan Baron, Omission and Commission in Judgment and Choice, 27 J. EXPERIMENTAL SOC. PSYCHOL. 76 (1991); Peter DeScioli, John Christner & Robert Kurzban, The Omission Strategy, 22 PSYCHOL. SCI. 442 (2011). On the omission bias, see also supra pp. 48–50.
(546.) Fiery Cushman, Liane Young & Marc Hauser, The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm, 17 PSYCHOL. SCI. 1082, 1086 (2006).
(547.) JOHN MIKHAIL, ELEMENTS OF MORAL COGNITION 77–85, 319–60 (2011). For comparable findings, see Marc Hauser et al., A Dissociation between Moral Judgments and Justifications, 22 MIND & LANGUAGE 1 (2007); Cushman, Young & Hauser, supra note 546.
(549.) Adam B. Moore, Brian A. Clark & Michael J. Kane, Who Shalt Not Kill? Individual Differences in Working Memory Capacity, Executive Control, and Moral Judgment, 19 PSYCHOL. SCI. 549 (2008).
(550.) Lewis Petrinovich, Patricia O’Neill & Matthew Jorgensen, An Empirical Study of Moral Intuitions: Toward an Evolutionary Ethics, 64 J. PERSONALITY & SOC. PSYCHOL. 467 (1993).
(551.) Shaun Nichols & Ron Mallon, Moral Dilemmas and Moral Rules, 100 COGNITION 530 (2006).
(552.) See, e.g., Daniel M. Bartels, Principled Moral Sentiment and the Flexibility of Moral Judgment and Decision Making, 108 COGNITION 381 (2008). See also Tage S. Rai & Keith J. Holyoak, Moral Principles or Consumer Preferences? Alternative Framings of the Trolley Problem, 34 COGNITIVE SCI. 311 (2010). While this study focused on other aspects of choices in the trolley problem, in all the reported experiments, under all conditions, people’s judgments were consistent with moderate deontology. Only a small minority of subjects expressed judgments that conformed to either consequentialism or absolutist deontology.
(553.) MARC D. HAUSER, MORAL MINDS: HOW NATURE DESIGNED OUR UNIVERSAL SENSE OF RIGHT AND WRONG 111–31 (2006).
(554.) Konika Banerjee, Bryce Huebner & Marc D. Hauser, Intuitive Moral Judgments Are Robust across Demographic Variation in Gender, Education, Politics, and Religion: A Large-Scale Web-Based Study, 10 J. COGNITION & CULTURE 253 (2010).
(555.) See, e.g., Joshua D. Greene et al., An fMRI Investigation of Emotional Engagement in Moral Judgment, 293 SCI. 2105 (2001); Joshua D. Greene et al., The Neural Bases of Cognitive Conflict and Control in Moral Judgment, 44 NEURON 389 (2004). For an overview, see Bartels et al., supra note 538, at 488–90.
(556.) Michael Koenigs et al., Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgments, 446 NATURE 908 (2007). See also Guy Kahane & Nicholas Shackel, Do Abnormal Responses Show Utilitarian Bias?, 452 NATURE E5 (2008); Michael Koenigs et al., Reply, 452 NATURE E5 (2008).
(559.) Michał Białek & Wim De Neys, Dual Processes and Moral Conflict: Evidence for Deontological Reasoners’ Intuitive Utilitarian Sensitivity, 12 JUDGMENT & DECISION MAKING 148 (2017).
(560.) Guy Kahane et al., The Neural Basis of Intuitive and Counterintuitive Moral Judgment, 7 SOC. COGNITIVE & AFFECTIVE NEUROSCI. 393 (2012).
(561.) Id.; Cushman, Young & Hauser, supra note 546; Fiery Cushman, Liane Young & Joshua D. Greene, Multi-system Moral Psychology, in JOHN M. DORIS AND THE MORAL PSYCHOLOGY RESEARCH GROUP, THE MORAL PSYCHOLOGY HANDBOOK 47 (2010); Jesse J. Prinz & Shaun Nichols, Moral Emotions, in THE MORAL PSYCHOLOGY HANDBOOK, id. at 111; Fiery Cushman et al., Judgment before Principle: Engagement of the Frontoparietal Control Network, 7 SOC. COGNITIVE & AFFECTIVE NEUROSCI. 888 (2012); Daniel M. Bartels, Principled Moral Sentiment and the Flexibility of Moral Judgment and Decision Making, 108 COGNITION 381 (2008); Jana Schaich Borg et al., Consequences, Action, and Intention as Factors in Moral Judgments: An fMRI Investigation, 18 J. COGNITIVE NEUROSCI. 803 (2006).
(562.) For an overview, see Daphna Lewinsohn-Zamir, Ilana Ritov & Tehila Kogut, Law and Identifiability, 92 IND. L.J. 505, 509–19 (2017).
(563.) For conflicting arguments in these debates, see, e.g., Cass R. Sunstein, Moral Heuristics, 28 BEHAV. & BRAIN SCI. 531 (2005) (the article is followed by twenty-four commentaries and the author’s response; see 28 BEHAV. & BRAIN SCI. 542–70 (2005)); Joshua D. Greene, The Secret Joke of Kant’s Soul, in MORAL PSYCHOLOGY, VOL. 3: THE NEUROSCIENCE OF MORALITY: EMOTION, BRAIN DISORDERS, AND DEVELOPMENT 35 (W. Sinnott-Armstrong ed., 2008); S. Matthew Liao, A Defense of Intuitions, 140 PHIL. STUD. 247 (2008); F.M. Kamm, Neuroscience and Moral Reasoning: A Note on Recent Research, 37 PHIL. & PUB. AFF. 330 (2009); Waldmann, Nagel & Wiegmann, supra note 542, at 373–74; Bartels et al., supra note 538, at 495–96.
(565.) See infra p. 436.
(566.) See, e.g., J. Stacy Adams, Inequality in Social Exchange, in 2 ADVANCES IN EXPERIMENTAL SOCIAL PSYCHOLOGY 267 (1965); Elaine Walster, Ellen Berscheid & G. William Walster, New Directions in Equity Research, 25 J. PERSONALITY & SOC. PSYCHOL. 151 (1973). For overviews, see Linda J. Skitka & Daniel C. Wisneski, Justice Theory and Research: A Social Functionalist Perspective, in HANDBOOK OF PSYCHOLOGY, supra note 337, at 406, 407–10; John T. Jost & Aaron C. Kay, Social Justice: History, Theory, and Research, in 2 HANDBOOK OF SOCIAL PSYCHOLOGY, supra note 3, at 1122, 1130–33.
(567.) Jerald Greenberg, Stealing in the Name of Justice: Informational and Interpersonal Moderators of Theft Reactions to Underpayment Inequity, 54 ORG. BEHAV. & HUM. DECISION PROCESSES 81 (1993).
(568.) Linda J. Skitka, Jennifer Winquist & Susan Hutchinson, Are Outcome Fairness and Outcome Favorability Distinguishable Psychological Constructs? A Meta-analytic Review, 16 SOC. JUST. RES. 309 (2003).
(569.) See, e.g., Hessel Oosterbeek, Randolph Sloof & Gijs van de Kuilen, Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-analysis, 7 EXPERIMENTAL ECON. 171 (2004).
(570.) For a general survey and analysis of the experimental data, see COLIN F. CAMERER, BEHAVIORAL GAME THEORY: EXPERIMENTS IN STRATEGIC INTERACTION 43–117 (2003).
(571.) Christoph Engel, Dictator Games: A Meta Study, 14 EXPERIMENTAL ECON. 583 (2011). See also infra pp. 106–10.
(572.) Daniel Kahneman, Jack L. Knetsch & Richard Thaler, Fairness as a Constraint on Profit Seeking: Entitlements in the Market, 76 AM. ECON. REV. 728 (1986).
(573.) Eyal Zamir & Ilana Ritov, Notions of Fairness and Contingent Fees, 74 LAW & CONTEMP. PROBS. 1 (2010); infra pp. 510–12.
(574.) Gerald S. Leventhal, Fairness in Social Relationships, in CONTEMPORARY TOPICS IN SOCIAL PSYCHOLOGY 211 (John W. Thibaut, Janet T. Spence & Robert C. Carson eds., 1976); Melvin J. Lerner, Dale T. Miller & J.G. Holmes, Deserving and the Emergence of Forms of Justice, in 9 ADVANCES IN EXPERIMENTAL SOCIAL PSYCHOLOGY 133, 152–60 (1976); Helmut Lamm & Thomas Schwinger, Norms concerning Distributive Justice: Are Needs Taken into Consideration in Allocation Decisions?, 43 SOC. PSYCHOL. Q. 425 (1980).
(575.) See, e.g., Melvin J. Lerner, The Justice Motive: Some Hypotheses as to Its Origins and Forms, 45 J. PERSONALITY 1, 24–28 (1977).
(576.) See, e.g., Brenda Major & Jeffrey B. Adams, Role of Gender, Interpersonal Orientation, and Self-Presentation in Distributive-Justice Behavior, 45 J. PERSONALITY & SOC. PSYCHOL. 598 (1983).
(578.) See, e.g., Carol T. Kulik & Maureen L. Ambrose, Personal and Situational Determinants of Referent Choice, 17 ACAD. MGMT. REV. 212 (1992); Lisa D. Ordóñez, Terry Connolly & Richard Coughlan, Multiple Reference Points in Satisfaction and Fairness Assessment, 13 J. BEHAV. DECISION MAKING 329 (2000).
(579.) See, e.g., Menachem Yaari & Maya Bar-Hillel, On Dividing Justly, 1 SOC. CHOICE & WELFARE 1 (1984).
(580.) Valerio Capraro & David G. Rand, Do the Right Thing: Preferences for Moral Behavior, Rather than Equity or Efficiency Per Se, Drive Human Prosociality (working paper, Nov. 2017), available at https://ssrn.com/abstract=2965067.
(581.) See generally Jost & Kay, supra note 566, at 1140–42; Skitka & Wisneski, supra note 566, at 410–14; Robert J. MacCoun, Voice, Control, and Belonging: The Double-Edged Sword of Procedural Fairness, 1 ANN. REV. L. & SOC. SCI. 171 (2005) (including implications for the law).
(582.) See, e.g., Robert Folger et al., Effects of “Voice” and Peer Opinions on Responses to Inequity, 45 J. PERSONALITY & SOC. PSYCHOL. 268 (1979) (resource allocation); Robert Folger et al., Elaborating Procedural Fairness: Justice Becomes Both Simpler and More Complex, 22 PERSONALITY & SOC. PSYCHOL. BULL. 435 (1996) (dispute resolution).
(583.) See, e.g., Tom R. Tyler & Robert Folger, Distributional and Procedural Aspects of Satisfaction with Citizen-Police Encounters, 1 BASIC & APPLIED SOC. PSYCHOL. 281 (1980).
(584.) JOHN THIBAUT & LAURENS WALKER, PROCEDURAL JUSTICE: A PSYCHOLOGICAL ANALYSIS (1975).
(585.) E. ALLAN LIND & TOM R. TYLER, THE SOCIAL PSYCHOLOGY OF PROCEDURAL FAIRNESS 230–41 (1988).
(586.) Debra L. Shapiro & Jeanne M. Brett, What Is the Role of Control in Organizational Justice?, in HANDBOOK OF ORGANIZATIONAL JUSTICE 155 (Jerald Greenberg & Jason A. Colquitt eds., 2005).
(587.) Kees van den Bos et al., How Do I Judge My Outcome when I Do Not Know the Outcome of Others? The Psychology of the Fair Process Effect, 72 J. PERSONALITY & SOC. PSYCHOL. 1034 (1997).
(588.) Ilse V. Grienberger, Christel G. Rutte & Ad F.M. van Knippenberg, Influence of Social Comparisons of Outcomes and Procedures on Fairness Judgments, 82 J. APPLIED PSYCHOL. 913 (1997).
(590.) See Jacinta M. Gau, Consent Searches as a Threat to Procedural Justice and Police Legitimacy: An Analysis of Consent Requests During Traffic Stops, 24 CRIM. JUS. POL’Y REV. 759 (2013); Tal Jonathan-Zamir, Badi Hasisi & Yoram Margalioth, Is It the What or the How? The Roles of High-Policing Tactics and Procedural Justice in Predicting Perceptions of Hostile Treatment: The Case of Security Checks at Ben-Gurion Airport, Israel, 50 LAW & SOC’Y REV. 608 (2016).
(591.) For overviews, see Melvin J. Lerner & Dale T. Miller, Just World Research and the Attribution Process: Looking Back and Ahead, 85 PSYCHOL. BULL. 1030 (1978); Adrian Furnham, Belief in a Just World: Research Progress over the Past Decade, 34 PERSONALITY & INDIVIDUAL DIFFERENCES 795 (2003); Jost & Kay, supra note 566, at 1136–38.
(592.) Gary Blasi & John T. Jost, System Justification Theory and Research: Implications for Law, Legal Advocacy, and Social Justice, 94 CAL. L. REV. 1119 (2006).
(593.) Melvin J. Lerner & Carolyn H. Simmons, Observer’s Reaction to the “Innocent Victim”: Compassion or Rejection?, 4 J. PERSONALITY & SOC. PSYCHOL. 203 (1966).
(594.) Carolyn L. Hafer, Do Innocent Victims Threaten the Belief in a Just World? Evidence from a Modified Stroop Task, 79 J. PERSONALITY & SOC. PSYCHOL. 165 (2000).
(595.) Carolyn L. Hafer & Laurent Bègue, Experimental Research on Just-World Theory: Problems, Developments, and Future Challenges, 131 PSYCHOL. BULL. 128 (2005).
(596.) C. Daniel Batson & Adam A. Powell, Altruism and Prosocial Behavior, in HANDBOOK OF PSYCHOLOGY, Vol. 5: PERSONALITY AND SOCIAL PSYCHOLOGY 463, 463 (Theodore Millon & Melvin J. Lerner eds., 2003).
(597.) See generally Mark Snyder & Allen M. Omoto, Volunteerism: Social Issues Perspectives and Social Policy Implications, 2 SOC. ISSUES & POL’Y REV. 1 (2008).
(598.) Mark Snyder & Patrick C. Dwyer, Altruism and Prosocial Behavior, in HANDBOOK OF PSYCHOLOGY, supra note 337, at 467, 467. For a lucid overview of behavioral-economics studies of cooperation, see Simon Gächter, Human Prosocial Motivation and the Maintenance of Social Order, in THE OXFORD HANDBOOK OF BEHAVIORAL ECONOMICS AND THE LAW, supra note 8, at 28.
(599.) For overviews and meta-analyses, see Bibb Latané & Steve Nida, Ten Years of Research on Group Size and Helping, 89 PSYCHOL. BULL. 308 (1981); JOHN F. DOVIDIO ET AL., THE SOCIAL PSYCHOLOGY OF PROSOCIAL BEHAVIOR 65–105 (2006); Peter Fischer et al., The Bystander-Effect: A Meta-analytic Review on Bystander Intervention in Dangerous and Non-dangerous Emergencies, 137 PSYCHOL. BULL. 517 (2011).
(600.) See, e.g., Louis A. Penner et al., Measuring the Prosocial Personality, in 10 ADVANCES IN PERSONALITY ASSESSMENT 147 (James N. Butcher & Charles D. Spielberger eds., 1995).
(602.) William G. Graziano & Nancy Eisenberg, Agreeableness: A Dimension of Personality, in HANDBOOK OF PERSONALITY PSYCHOLOGY 795 (Robert Hogan, John Johnson & Stephen Briggs eds., 1997).
(603.) See, e.g., Gian Vittorio Caprara, Guido Alessandri & Nancy Eisenberg, Prosociality: The Contribution of Traits, Values, and Self-Efficacy Beliefs, 102 J. PERSONALITY & SOC. PSYCHOL. 1289 (2012).
(604.) See, e.g., SAMUEL P. OLINER & PEARL M. OLINER, THE ALTRUISTIC PERSONALITY: RESCUERS OF JEWS IN NAZI EUROPE (1988); Penner et al., supra note 600; Caprara, Alessandri & Eisenberg, supra note 603.
(606.) Peter Salovey & David L. Rosenhan, Mood States and Prosocial Behavior, in HANDBOOK OF SOCIAL PSYCHOPHYSIOLOGY 371, 372–74 (Hugh Wagner & Antony Mansfield eds., 1989); Snyder & Dwyer, supra note 598, at 472.
(609.) See, e.g., Robert B. Cialdini et al., Empathy-Based Helping: Is It Selflessly or Selfishly Motivated?, 52 J. PERSONALITY & SOC. PSYCHOL. 749 (1987).
(611.) See, e.g., David A. Schroeder et al., Empathic Concern and Helping Behavior: Egoism or Altruism?, 24 J. EXPERIMENTAL SOC. PSYCHOL. 333 (1988); C. Daniel Batson et al., Negative-State Relief and the Empathy-Altruism Hypothesis, 56 J. PERSONALITY & SOC. PSYCHOL. 922 (1989); C. DANIEL BATSON, THE ALTRUISM QUESTION: TOWARD A SOCIAL-PSYCHOLOGICAL ANSWER (1991). For an overview of the debate, see DOVIDIO ET AL., supra note 599, at 118–43.
(615.) Yan Zhang & Nicholas Epley, Self-Centered Social Exchange: Differential Use of Costs versus Benefits in Prosocial Reciprocity, 97 J. PERSONALITY & SOC. PSYCHOL. 796 (2009).
(618.) See generally PAUL A.M. VAN LANGE ET AL., SOCIAL DILEMMAS: THE PSYCHOLOGY OF HUMAN COOPERATION (2014).
(619.) Tragedy of the commons denotes a situation in which a resource is open for use by many individuals, and overusing it results in its destruction, such as a pasture used for grazing. Since each user reaps the benefit of his or her use, but the costs are borne collectively, in the absence of coordination, self-interested behavior is expected to harm everybody. A public good is a good that is non-excludable, that is, people cannot be effectively excluded from using it, and non-rivalrous, that is, its use by one person does not reduce its availability to others. For example, national security is a public good. Since people can free-ride on others’ investment in producing public goods, according to standard economic theory no individual would contribute to their production, to the detriment of all.
(620.) On this and more elaborate typologies, see, e.g., Paul A.M. Van Lange, The Pursuit of Joint Outcomes and Equality in Outcomes: An Integrative Model of Social Value Orientation, 77 J. PERSONALITY & SOC. PSYCHOL. 337 (1999); Wing Tung Au & Jessica Y.Y. Kwong, Measurements and Effects of Social-Value Orientation in Social Dilemmas: A Review, in CONTEMPORARY PSYCHOLOGICAL RESEARCH ON SOCIAL DILEMMAS 71 (Ramzi Suleiman et al. eds., 2004).
(621.) Au & Kwong, supra note 620, at 72–74. In a decomposed game, a subject is instructed to allocate a certain pie between self and another (imaginary) person, and the total payoff the subject is expected to receive is the sum of the “self” allocation she chose plus the “other” allocation chosen by the other (imaginary) person.
(622.) Daniel Balliet, Craig Parks & Jeff Joireman, Social Value Orientation and Cooperation: A Meta-analysis, 12 GROUP PROCESSES & INTERGROUP REL. 533 (2009).
(625.) Susan Mohammed & Alexander Schwall, Individual Differences and Decision Making: What We Know and Where We Go from Here, 24 INT’L REV. INDUS. & ORG. PSYCHOL. 249, 249–54 (2009). On the intellectual roots of this deficiency in JDM research and in behavioral economics, see Jeffrey J. Rachlinski, Cognitive Errors, Individual Differences, and Paternalism, 73 U. CHI. L. REV. 207, 209–10 (2006).
(626.) Kirstin C. Appelt et al., The Decision Making Individual Differences Inventory and Guidelines for the Study of Individual Differences in Judgment and Decision-Making Research, 6 JUDGMENT & DECISION MAKING 252 (2011).
(627.) See infra pp. 170–71, 177–85.
(628.) For additional studies of individual differences and their correlates, see supra pp. 75–76, 87, 107–08.
(630.) Keith E. Stanovich & Richard F. West, Individual Differences in Rational Thought, 127 J. EXPERIMENTAL PSYCHOL.: GENERAL 161, 161–64 (1998).
(632.) Keith E. Stanovich & Richard F. West, On the Relative Independence of Thinking Biases and Cognitive Ability, 94 J. PERSONALITY & SOC. PSYCHOL. 672 (2008). On these biases, see generally supra pp. 28–29, 30–31, 34, 46–48, 48–50, 56–57, 58–61, and 79–82, respectively.
(633.) See supra p. 22.
(636.) See, e.g., Irwin P. Levin et al., A New Look at Framing Effects: Distribution of Effect Sizes, Individual Differences, and Independence of Types of Effects, 88 ORG. BEHAV. & HUM. DECISION PROCESSES 411, 427 (2002); Mohammed & Schwall, supra note 625, at 255–59, 280.
(637.) See supra pp. 22–23.
(638.) Shane Frederick, Cognitive Reflection and Decision Making, 19 J. ECON. PERSP. 25 (2005).
(643.) Marco Lauriola & Irwin P. Levin, Personality Traits and Risky Decision-Making in a Controlled Experimental Task: An Exploratory Study, 31 PERSONALITY & INDIVIDUAL DIFFERENCES 215 (2001). For additional findings regarding personality traits and risk attitude, see Marvin Zuckerman & D. Michael Kuhlman, Personality and Risk‐Taking: Common Biosocial Factors, 68 J. PERSONALITY 999 (2000); Marco Lauriola et al., Individual Differences in Risky Decision Making: A Meta‐analysis of Sensation Seeking and Impulsivity with the Balloon Analogue Risk Task, 27 J. BEHAV. DECISION MAKING 20 (2014).
(644.) Henry Moon, The Two Faces of Conscientiousness: Duty and Achievement Striving in Escalation of Commitment Dilemmas, 86 J. APPLIED PSYCHOL. 533 (2001). On escalation of commitment, see supra pp. 56–57.
(647.) Ulrich Schmidt & Stefan Traub, An Experimental Test of Loss Aversion, 25 J. RISK & UNCERTAINTY 233 (2002); Peter Brooks & Horst Zank, Loss Averse Behavior, 31 J. RISK & UNCERTAINTY 301 (2005); Adam S. Booij & Gijs van de Kuilen, A Parameter-Free Analysis of the Utility of Money for the General Population under Prospect Theory, 30 J. ECON. PSYCHOL. 651 (2009).
(648.) Daniel Klapper, Christine Ebling & Jarg Temme, Another Look at Loss Aversion in Brand Choice Data: Can We Characterize the Loss Averse Consumer?, 22 INT’L J. RES. MARKETING 239 (2005).
(650.) Wändi Bruine de Bruin, Andrew M. Parker & Baruch Fischhoff, Explaining Adult Age Differences in Decision-Making Competence, 25 J. BEHAV. DECISION MAKING 352 (2012).
(651.) See, e.g., Wändi Bruine de Bruin, Andrew M. Parker & Baruch Fischhoff, Individual Differences in Adult Decision-Making Competence, 92 J. PERSONALITY & SOC. PSYCHOL. 938 (2007).
(655.) Daniel Kahneman & Gary Klein, Conditions for Intuitive Expertise: A Failure to Disagree, 64 AM. PSYCHOLOGIST 515 (2009).
(656.) See infra pp. 128–29.
(658.) Erik Dane, Reconsidering the Trade‐Off between Expertise and Flexibility: A Cognitive Entrenchment Perspective, 35 ACAD. MGMT. REV. 579 (2010).
(665.) Barbara J. McNeil et al., On the Elicitation of Preferences for Alternative Therapies, 306 NEW ENGLAND J. MED. 1259 (1982). Similar results were obtained with professional investment managers and financial planners. See Robert A. Olsen, Prospect Theory as an Explanation of Risky Choice by Professional Investors: Some Evidence, 6 REV. FIN. ECON. 225, 228–29 (1997); Michael J. Roszkowski & Glenn E. Snelbecker, Effects of “Framing” on Measures of Risk Tolerance: Financial Planners Are Not Immune, 19 J. BEHAV. ECON. 237 (1990).
(666.) Joshua D. Coval & Tyler Shumway, Do Behavioral Biases Affect Prices?, 60 J. FIN. 1 (2005).
(667.) Zur Shapira & Itzhak Venezia, Patterns of Behavior of Professionally Managed and Independent Investors, 25 J. BANKING & FIN. 1573 (2001).
(668.) Lei Feng & Mark S. Seasholes, Do Investor Sophistication and Trading Experience Eliminate Behavioral Biases in Financial Markets?, 9 REV. FIN. 305 (2005).
(669.) See, e.g., Gregory Gurevich, Doron Kliger & Ori Levy, Decision-Making under Uncertainty—A Field Study of Cumulative Prospect Theory, 33 J. BANKING & FIN. 1221 (2009).
(670.) Ofer H. Azar, Do People Think about Absolute or Relative Price Differences When Choosing between Substitute Goods?, 32 J. ECON. PSYCHOL. 450 (2011); Eyal Zamir & Ilana Ritov, Revisiting the Debate over Attorneys’ Contingent Fees: A Behavioral Analysis, 39 J. LEGAL STUD. 245, 255–59 (2010).
(671.) See Kaye J. Newberry, Philip M.J. Reckers & Robert W. Wyndelts, An Examination of Tax Practitioner Decisions: The Role of Preparer Sanctions and Framing Effects Associated with Client Condition, 14 J. ECON. PSYCHOL. 439 (1993).
(672.) Alain Cohn, Ernst Fehr & Michel André Maréchal, Business Culture and Dishonesty in the Banking System, 516 NATURE 86 (2014).
(673.) On professional decision-making, see supra pp. 114–17.
(674.) See infra pp. 360–61, 393–99, 509–19.
(675.) Brian J. Zikmund-Fisher et al., A Matter of Perspective: Choosing for Others Differs from Choosing for Yourself in Making Treatment Decisions, 21 J. GEN. INTERNAL MED. 618 (2006). See also Peter A. Ubel, Andrea M. Angott & Brian J. Zikmund-Fisher, Physicians Recommend Different Treatments for Patients than They Would Choose for Themselves, 171 ARCHIVES INTERNAL MED. 630 (2011) (the influenza scenario).
(676.) Jingyi Lu & Xiaofei Xie, To Change or Not to Change: A Matter of Decision Maker’s Role, 124 ORG. BEHAV. & HUM. DECISION PROCESSES 47 (2014). On the status-quo and omission biases, see generally supra pp. 48–50.
(677.) Evan Polman, Self–Other Decision Making and Loss Aversion, 119 ORG. BEHAV. & HUM. DECISION PROCESSES 141 (2012); Flavia Mengarelli et al., Economic Decisions for Others: An Exception to Loss Aversion Law, 9 PLOS ONE e85042 (2014). See also Jingyi Lu et al., Missing the Best Opportunity; Who Can Seize the Next One? Agents Show Less Inaction Inertia than Personal Decision Makers, 54 J. ECON. PSYCHOL. 100 (2016) (finding that in deciding for others, people are less vulnerable to the inaction inertia—the phenomenon, associated with loss aversion, “whereby missing a superior opportunity decreases the likelihood of acting on a subsequent opportunity in the same domain”). On loss aversion, see generally supra pp. 42–57.
(678.) James D. Marshall, Jack L. Knetsch, & J.A. Sinden, Agents’ Evaluations and the Disparity in Measures of Economic Loss, 7 J. ECON. BEHAV. & ORG. 115 (1986); Jennifer Arlen & Stephan Tontrup, Does the Endowment Effect Justify Legal Intervention? The Debiasing Effect of Institutions, 44 J. LEGAL STUD. 143 (2015). But see Russell Korobkin, The Status Quo Bias and Contract Default Rules, 83 CORNELL L. REV. 608, 633–47 (1998) (finding an endowment effect for entitlements under default contract rules when subjects—first-year law students—were asked to imagine themselves advising a client about a transaction). On the endowment effect, see generally supra pp. 50–56.
(680.) Eric Shaban, Roshni Guerry & Timothy E. Quill, Reconciling Physician Bias and Recommendations, 171 ARCHIVES INTERNAL MED. 634 (2011). An alternative explanation is that physicians are overoptimistic regarding themselves more than regarding others—hence they underweight the risk of death to a greater extent when deciding for themselves.
(681.) Pavel Atanasov et al., Comparing Physicians’ Personal Prevention Practices and Their Recommendations to Patients, 37 J. HEALTHCARE QUALITY 189 (2015). On procrastination and self-control, see generally supra pp. 87–93.
(682.) Rosmarie Mendel et al., ‘What Would You Do If You Were Me, Doctor?’: Randomised Trial of Psychiatrists’ Personal v. Professional Perspectives on Treatment Recommendations, 197 BRIT. J. PSYCHIATRY 441 (2010).
(684.) Zikmund-Fisher et al., supra note 675, at 621. The authors found the highest emotional engagement when respondents were asked to decide as parents, next highest when deciding as a physician, and lowest when deciding for themselves—whereas, as described in the text above, the omission bias was greatest when people decided for themselves.
(685.) Jingyi Lu, Xiaofei Xie & Jingzhe Xu, Desirability or Feasibility: Self–Other Decision-Making Differences, 39 PERSONALITY & SOC. PSYCHOL. BULL. 144 (2013).
(686.) See generally Trope & Liberman, supra note 503; Fujita, Trope & Liberman, supra note 503. See also Rachel Barkan, Shai Danziger & Yaniv Shani, Do as I Say, Not as I Do: Choice–Advice Differences in Decisions to Learn Information, 125 J. ECON. BEHAV. & ORG. 57 (2016).
(689.) Eva Jonas, Stefan Schulz-Hardt & Dieter Frey, Giving Advice or Making Decisions in Someone Else’s Place: The Influence of Impression, Defense, and Accuracy Motivation on the Search for New Information, 31 PERSONALITY & SOC. PSYCHOL. BULL. 977 (2005).
(690.) These observations refer to advice giving. On advice taking, see infra pp. 123–24.
(691.) See, e.g., Christoph Engel, The Behaviour of Corporate Actors: How Much Can We Learn from the Experimental Literature?, 6 J. INSTITUTIONAL ECON. 445 (2010).
(692.) See, e.g., Steven R. Elliott & Michael McKee, Collective Risk Decisions in the Presence of Many Risks, 48 KYKLOS 541 (1995).
(694.) On simple aggregation of judgments or preferences and aggregation with minimal information exchange, see generally R. Scott Tindale & Katherina Kluwe, Decision Making in Groups and Organizations, in 2 WILEY BLACKWELL HANDBOOK, supra note 2, at 849, 851–54. On group decision-making in specific legal contexts, see infra pp. 365–66, 369, 372, 394, 416, 424, 559–61.
(695.) PATRICK R. LAUGHLIN, GROUP PROBLEM SOLVING 22–44, 57–108 (2011).
(696.) See, e.g., GROUP CREATIVITY: INNOVATION THROUGH COLLABORATION (Paul B. Paulus & Bernard A. Nijstad eds., 2003); Norbert L. Kerr & R. Scott Tindale, Group‐Based Forecasting: A Social Psychological Analysis, 27 INT’L J. FORECASTING 14 (2011).
(697.) Norbert L. Kerr, Robert J. MacCoun & Geoffrey P. Kramer, Bias in Judgment: Comparing Individuals and Groups, 103 PSYCHOL. REV. 687 (1996).
(700.) Patrick R. Laughlin & Alan L. Ellis, Demonstrability and Social Combination Processes on Mathematical Intellective Tasks, 22 J. EXPERIMENTAL SOC. PSYCHOL. 177 (1986).
(701.) See, e.g., Garold Stasser & William Titus, Pooling of Unshared Information in Group Decision Making: Biased Information Sampling during Discussion, 48 J. PERSONALITY & SOC. PSYCHOL. 1467 (1985); Daniel Gigone & Reid Hastie, The Common Knowledge Effect: Information Sharing and Group Judgment, 65 J. PERSONALITY & SOC. PSYCHOL. 959 (1993).
(702.) Felix C. Brodbeck et al., Group Decision Making under Conditions of Distributed Knowledge: The Information Asymmetries Model, 32 ACAD. MGMT. REV. 459 (2007); Tindale & Kluwe, supra note 694, at 859–62, 864–66.
(703.) David G. Myers & Helmut Lamm, The Group Polarization Phenomenon, 83 PSYCHOL. BULL. 602 (1976); Daniel J. Isenberg, Group Polarization: A Critical Review and Meta-analysis, 50 J. PERSONALITY & SOC. PSYCHOL. 1141 (1986); Cass R. Sunstein, Deliberative Trouble? Why Groups Go to Extremes, 110 YALE L.J. 71 (2000).
(704.) See supra pp. 22–23.
(705.) R. Scott Tindale, Decision Errors Made by Individuals and Groups, in INDIVIDUAL AND GROUP DECISION MAKING: CURRENT ISSUES 109 (N. John Castellan, Jr., ed., 1993). On the conjunction fallacy and base-rate neglect, see generally supra pp. 28–31.
(706.) Timothy W. McGuire, Sara Kiesler & Jane Siegel, Group and Computer-Mediated Discussion Effects in Risk Decision Making, 52 J. PERSONALITY & SOC. PSYCHOL. 917 (1987); Paul W. Paese, Mary Bieser & Mark E. Tubbs, Framing Effects and Choice Shifts in Group Decision Making, 56 ORG. BEHAV. & HUM. DECISION PROCESSES 149 (1993); Whyte, supra note 253.
(707.) Tatsuya Kameda & James H. Davis, The Function of the Reference Point in Individual and Group Risk Decision Making, 46 ORG. BEHAV. & HUM. DECISION PROCESSES 55 (1990); R. Scott Tindale, Susan Sheffey & Leslie A. Scott, Framing and Group Decision-Making: Do Cognitive Changes Parallel Preference Changes?, 55 ORG. BEHAV. & HUM. DECISION PROCESSES 470 (1993). Tindale and his coauthors found that the group’s decision is usually in line with the majority’s framing, without necessarily changing the minority’s framing.
(708.) Jeremy A. Blumenthal, Group Deliberation and the Endowment Effect: An Experimental Study, 50 HOUS. L. REV. 41 (2012); Amira Galin, Endowment Effect in Negotiations: Group versus Individual Decision-Making, 75 THEORY & DECISION 389 (2013).
(709.) Glen Whyte, Escalating Commitment in Individual and Group Decision Making: A Prospect Theory Approach, 54 ORG. BEHAV. & HUM. DECISION PROCESSES 430 (1993). See also Max Bazerman, Toni Giuliano & Alan Appelman, Escalation of Commitment in Individual and Group Decision Making, 33 ORG. BEHAV. & HUM. PERFORMANCE 141 (1984).
(711.) See generally Silvia Bonaccio & Reeshad S. Dalal, Advice Taking and Decision Making: An Integrative Literature Review and Implications for the Organizational Sciences, 101 ORG. BEHAV. & HUM. DECISION PROCESSES 127 (2006). On advice giving, see supra p. 120.
(712.) Nigel Harvey & Ilan Fischer, Taking Advice: Accepting Help, Improving Judgment, and Sharing Responsibility, 70 ORG. BEHAV. & HUM. DECISION PROCESSES 117 (1997); Ilan Yaniv, Receiving Other People’s Advice: Influence and Benefit, 93 ORG. BEHAV. & HUM. DECISION PROCESSES 1 (2004).
(717.) See, e.g., Nigel Harvey & Clare Harries, Effects of Judges’ Forecasting on Their Later Combination of Forecasts for the Same Outcomes, 20 INT’L J. FORECASTING 391 (2004); Joachim I. Krueger, Return of the Ego—Self-Referent Information as a Filter for Social Prediction: Comment on Karniol (2003), 110 PSYCHOL. REV. 585 (2003). On egocentrism, see generally supra pp. 58–76.
(718.) Francesca Gino, Do We Listen to Advice Just Because We Paid for It? The Impact of Advice Cost on Its Use, 107 ORG. BEHAV. & HUM. DECISION PROCESSES 234 (2008). On sunk costs, see generally supra pp. 56–57.
(719.) Paul C. Price & Eric R. Stone, Intuitive Evaluation of Likelihood Judgment Producers: Evidence for a Confidence Heuristic, 17 J. BEHAV. DECISION MAKING 39 (2004); Bonaccio & Dalal, supra note 711, at 132–33. See also infra p. 572.
(721.) See generally Krishna Savani et al., Culture and Judgment and Decision Making, in WILEY BLACKWELL HANDBOOK, supra note 2, at 456. On cross-cultural differences in experimental game theory, see, e.g., Oosterbeek, Sloof & van de Kuilen, supra note 569.
(722.) See, e.g., Harry C. Triandis, The Self and Social Behavior in Differing Cultural Contexts, 96 PSYCHOL. REV. 506 (1989).
(723.) Hazel R. Markus & Shinobu Kitayama, Culture and the Self: Implications for Cognition, Emotion, and Motivation, 98 PSYCHOL. REV. 224 (1991).
(724.) RICHARD E. NISBETT, THE GEOGRAPHY OF THOUGHT: HOW ASIANS AND WESTERNERS THINK DIFFERENTLY . . . AND WHY (2003).
(725.) Elke U. Weber & Christopher Hsee, Cross-Cultural Differences in Risk Perception, But Cross-Cultural Similarities in Attitudes towards Perceived Risk, 44 MGMT. SCI. 1205 (1998); Christopher Hsee & Elke U. Weber, Cross‐National Differences in Risk Preference and Lay Predictions, 12 J. BEHAV. DECISION MAKING 165 (1999).
(726.) See generally supra pp. 88–93.
(727.) On priming techniques, see supra pp. 78–79.
(728.) Haipeng (Allan) Chen, Sharon Ng & Akshay R. Rao, Cultural Differences in Consumer Impatience, 42 J. MARKETING RES. 291 (2005).
(729.) Daniel J. Benjamin, James J. Choi & A. Joshua Strickland, Social Identity and Preferences, 100 AM. ECON. REV. 1913 (2010).
(730.) On these phenomena, see pp. 61–64, 64–66, and 68–69, respectively.
(731.) Steven J. Heine & Darrin R. Lehman, Cultural Variation in Unrealistic Optimism: Does the West Feel More Vulnerable than the East?, 68 J. PERSONALITY & SOC. PSYCHOL. 595 (1995). See also Savani et al., supra note 721, at 468–69.
(732.) Steven J. Heine & Takeshi Hamamura, In Search of East Asian Self-Enhancement, 11 PERSONALITY & SOC. PSYCHOL. REV. 4 (2007). See also Amy H. Mezulis et al., Is There a Universal Positivity Bias in Attributions? A Meta-analytic Review of Individual, Developmental, and Cultural Differences in the Self-Serving Attributional Bias, 130 PSYCHOL. BULL. 711, 714–15, 729–32 (2004).
(733.) William W. Maddux et al., For Whom Is Parting with Possessions More Painful? Cultural Differences in the Endowment Effect, 21 PSYCHOL. SCI. 1910 (2010). On the endowment effect, see supra pp. 50–56.
(734.) Two other studies have found cross-cultural differences with regard to phenomena associated with prospect theory, namely reference point adaptation and the escalation of commitment (on these phenomena, see supra pp. XX and XX, respectively). See Hal R. Arkes et al., A Cross-Cultural Study of Reference Point Adaptation: Evidence from China, Korea and the US, 112 ORG. BEHAV. & HUM. DECISION PROCESSES 99 (2010); David J. Sharp & Stephen B. Salter, Project Escalation and Sunk Costs: A Test of the International Generalizability of Agency and Prospect Theories, 28 J. INT’L BUS. STUD. 101 (1997).
(735.) For a review of these studies, see J. Frank Yates, Culture and Probability Judgment, 4 SOC. & PERSONALITY PSYCHOL. COMPASS 174 (2010).
(738.) See infra pp. 171–77, 314–18.
(740.) See also infra p. 185.
(741.) See supra pp. 114–17, 117–20, and 120–24, respectively.
(742.) See also infra pp. 157–86.
(744.) On policy and pragmatic issues associated with the adoption of debiasing techniques, see generally Larrick, supra note 743, at 331–34; Jack B. Soll, Katherine L. Milkman & John W. Payne, A User’s Guide to Debiasing, in 2 WILEY BLACKWELL HANDBOOK, supra note 2, at 924, 940–44.
(747.) MICHAEL LEWIS, MONEYBALL: THE ART OF WINNING AN UNFAIR GAME (2003).
(750.) See, e.g., Robyn M. Dawes, David Faust & Paul E. Meehl, Clinical versus Actuarial Judgment, 243 SCI. 1668 (1989).
(754.) Hal R. Arkes, Victoria A. Shaffer & Mitchell A. Medow, Patients Derogate Physicians Who Use a Computer‐Assisted Diagnostic Aid, 27 MED. DECISION MAKING 189 (2007).
(755.) See, e.g., Gregory Mitchell, Why Law and Economics’ Perfect Rationality Should Not Be Traded for Behavioral Law and Economics’ Equal Incompetence, 91 GEO. L.J. 67, 114–19 (2002); Eldar Shafir & Robyn A. LeBoeuf, Rationality, 53 ANN. REV. PSYCHOL. 491, 501–02 (2002).
(756.) Kevin G. Volpp et al., Financial Incentive‐Based Approaches for Weight Loss: A Randomized Trial, 300 J. AM. MED. ASS’N 2631 (2008).
(757.) Kevin G. Volpp et al., A Randomized, Controlled Trial of Financial Incentives for Smoking Cessation, 360 NEW ENGLAND J. MED. 699 (2009).
(758.) Gary Charness & Uri Gneezy, Incentives to Exercise, 77 ECONOMETRICA 909 (2009); Dan Acland & Matthew R. Levy, Naiveté, Projection Bias, and Habit Formation in Gym Attendance, 61 MGMT. SCI. 146 (2015).
(759.) See, e.g., Kevin G. Volpp et al., A Randomized Controlled Trial of Financial Incentives for Smoking Cessation, 15 CANCER EPIDEMIOLOGY, BIOMARKERS & PREVENTION 12 (2006) (finding that incentives increased quit rates after seventy-five days, but not after six months); Mitesh S. Patel et al., Premium-Based Financial Incentives Did Not Promote Workplace Weight Loss in a 2013–15 Study, 35 HEALTH AFF. 71 (2016).
(761.) See, e.g., Dan A. Stone & David A. Ziebart, A Model of Financial Incentive Effects in Decision Making, 61 ORG. BEHAV. & HUM. DECISION PROCESSES 250 (1995).
(762.) Colin F. Camerer & Robin M. Hogarth, The Effects of Financial Incentives in Experiments: A Review and Capital-Labor-Production Framework, 19 J. RISK & UNCERTAINTY 7, 19–21 (1999).
(764.) Tversky & Kahneman, supra note 168, at 455 (a similar pattern of the framing effect was found with and without real payoffs); Wolfgang Hell et al., Hindsight Bias: An Interaction of Automatic and Motivational Factors?, 16 MEMORY & COGNITION 533 (1988) (real payoffs did not have a statistically significant effect on the hindsight bias per se, but did interact with other variables manipulated in the experiment).
(765.) See, e.g., Gretchen B. Chapman & Eric J. Johnson, Incorporating the Irrelevant: Anchors in Judgments of Belief and Value, in HEURISTICS AND BIASES, supra note 14, at 120; Nicholas Epley & Thomas Gilovich, When Effortful Thinking Influences Judgmental Anchoring: Differential Effects of Forewarning and Incentives on Self-Generated and Externally Provided Anchors, 18 J. BEHAV. DECISION MAKING 199 (2005).
(766.) Baruch Fischhoff, Paul Slovic & Sarah Lichtenstein, Knowing with Certainty: The Appropriateness of Extreme Confidence, 3 J. EXPERIMENTAL PSYCHOL.: HUM. PERCEPTION & PERFORMANCE 552 (1977).
(767.) Paul W. Paese & Janet A. Sniezek, Influences on the Appropriateness of Confidence in Judgment: Practice, Effort, Information, and Decision-Making, 48 ORG. BEHAV. & HUM. DECISION PROCESSES 100 (1991).
(768.) David Grether & Charles Plott, Economic Theory of Choice and the Preference Reversal Phenomenon, 69 AM. ECON. REV. 623, 632 (1979).
(770.) Uri Gneezy, Stephan Meier & Pedro Rey-Biel, When and Why Incentives (Don’t) Work to Modify Behavior, 25 J. ECON. PERSP. 191, 192–94 (2011).
(771.) Dan Ariely et al., Large Stakes and Big Mistakes, 76 REV. ECON. STUD. 451 (2009).
(773.) Ann E. Tenbrunsel & David M. Messick, Sanctioning Systems, Decision Frames, and Cooperation, 44 ADMIN. SCI. Q. 684 (1999).
(774.) Roland G. Fryer, Jr. et al., Enhancing the Efficacy of Teacher Incentives through Loss Aversion: A Field Experiment (working paper, July 2012, available at http://www.nber.org/papers/w18237). See also Tanjim Hossain & John A. List, The Behavioralist Visits the Factory: Increased Productivity Using Simple Framing Manipulations, 58 MGMT. SCI. 2151 (2012); Fuhai Hong, Tanjim Hossain & John A. List, Framing Manipulations in Contests: A Natural Field Experiment, 118 J. ECON. BEHAV. & ORG. 372 (2015).
(775.) For a thorough review, see Jennifer S. Lerner & Philip E. Tetlock, Accounting for the Effects of Accountability, 125 PSYCHOL. BULL. 255 (1999).
(776.) Jennifer S. Lerner & Philip E. Tetlock, Bridging Individual, Interpersonal, and Institutional Approaches to Judgment and Choice: The Impact of Accountability on Cognitive Bias, in EMERGING PERSPECTIVES ON JUDGMENT AND DECISION RESEARCH 431, 433–34 (Sandra L. Schneider & James Shanteau eds., 2003).
(778.) Philip E. Tetlock & Richard Boettger, Accountability Amplifies the Status Quo Effect when Change Creates Victims, 7 J. BEHAV. DECISION MAKING 1 (1994).
(780.) Itamar Simonson & Barry M. Staw, Deescalation Strategies: A Comparison of Techniques for Reducing Commitment to Losing Courses of Action, 77 J. APPLIED PSYCHOL. 419 (1992); Karen Siegel-Jacobs & J. Frank Yates, Effects of Procedural and Outcome Accountability on Judgment Quality, 65 ORG. BEHAV. & HUM. DECISION PROCESSES 1 (1996). These characteristics are not universal, however. See Bart de Langhe, Stijn M.J. van Osselaer & Berend Wierenga, The Effects of Process and Outcome Accountability on Judgment Process and Performance, 115 ORG. BEHAV. & HUM. DECISION PROCESSES 238 (2011).
(786.) Itamar Simonson & Peter Nye, The Effect of Accountability on Susceptibility to Decision Errors, 51 ORG. BEHAV. & HUM. DECISION PROCESSES 416, 435–37 (1992). On these biases, see supra pp. 30–31 and 32–34, respectively.
(787.) Lerner & Tetlock, supra note 775, at 264. It should be noted, however, that the findings that accountability amplifies the compromise and attraction effects (Simonson, supra note 455) have been qualified in subsequent studies (Itamar Simonson & Stephen M. Nowlis, The Role of Explanations and Need for Uniqueness in Consumer Decision Making: Unconventional Choices Based on Reasons, 27 J. CONSUMER RES. 49 (2000)).
(790.) See infra pp. 177–85.
(791.) This effect denotes people’s tendency, once provided with the correct answers to questions, to misremember the extent to which they had known those answers, as well as to overstate, in response to a hypothetical question, the extent to which they would have known them. See also supra pp. 38–39.
(792.) Baruch Fischhoff, Perceived Informativeness of Facts, 3 J. EXPERIMENTAL PSYCHOL.: HUM. PERCEPTION & PERFORMANCE 349 (1977).
(793.) See Kamin & Rachlinski, supra note 118 (failed debiasing); Merrie Jo Stallard & Debra L. Worthington, Reducing the Hindsight Bias Utilizing Attorney Closing Arguments, 22 LAW & HUM. BEHAV. 671 (1998) (effective debiasing).
(794.) Joey F. George, Kevin Duffy & Manju Ahuja, Countering the Anchoring and Adjustment Bias with Decision Support Systems, 29 DECISION SUPPORT SYS. 195 (2000). On anchoring, see generally supra pp. 79–82.
(795.) Fei-Fei Cheng & Chin-Shan Wu, Debiasing the Framing Effect: The Effect of Warning and Involvement, 49 DECISION SUPPORT SYS. 328 (2010). On framing effects, see supra pp. 46–48.
(796.) See Charles G. Lord, Mark R. Lepper & Elizabeth Preston, Considering the Opposite: A Corrective Strategy for Social Judgment, 47 J. PERSONALITY & SOC. PSYCHOL. 1231 (1984).
(798.) Shane Frederick et al., Opportunity Cost Neglect, 36 J. CONSUMER RES. 553 (2009).
(799.) Thomas Mussweiler, Fritz Strack & Tim Pfeiffer, Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility, 26 PERSONALITY & SOC. PSYCHOL. BULL. 1142 (2000).
(801.) Linda Babcock, George Loewenstein & Samuel Issacharoff, Creating Convergence: Debiasing Biased Litigants, 22 LAW & SOC. INQUIRY 913 (1997).
(802.) Edward R. Hirt & Keith D. Markman, Multiple Explanation: A Consider-an-Alternative Strategy for Debiasing Judgments, 69 J. PERSONALITY & SOC. PSYCHOL. 1069 (1995).
(806.) Geoffrey T. Fong, David H. Krantz & Richard E. Nisbett, The Effects of Statistical Training on Thinking about Everyday Problems, 18 COGNITIVE PSYCHOL. 253 (1986).
(807.) Richard P. Larrick, James N. Morgan & Richard E. Nisbett, Teaching the Use of Cost-Benefit Reasoning in Everyday Life, 1 PSYCHOL. SCI. 362 (1990).
(808.) See, e.g., Geoffrey T. Fong & Richard E. Nisbett, Immediate and Delayed Transfer of Training Effects in Statistical Reasoning, 120 J. EXPERIMENTAL PSYCHOL.: GENERAL 34 (1991) (finding that after two weeks there was a significant decline in performance in the untrained domain, though performance was still better than for untrained subjects). See generally RULES OF REASONING, supra note 805; Soll, Milkman & Payne, supra note 744, at 930–31; Rachlinski, supra note 625, at 219–21.
(810.) Peter Sedlmeier & Gerd Gigerenzer, Teaching Bayesian Reasoning in Less than Two Hours, 130 J. EXPERIMENTAL PSYCHOL.: GENERAL 380 (2001).
(812.) Sammy Almashat et al., Framing Effect Debiasing in Medical Decision Making, 71 PATIENT EDUC. & COUNSELING 102 (2008).
(813.) Craig Emby & David Finley, Debiasing Framing Effects in Auditors’ Internal Control Judgments and Testing Decisions, 14(2) CONTEMP. ACCOUNTING RES. 55 (1997).
(814.) Arkes & Blumer, supra note 243, at 136. Similarly, no statistically significant difference was found between the WTP-WTA gap of economics students and students of other fields, regarding Christmas presents. See Thomas K. Bauer & Christoph M. Schmidt, WTP vs. WTA: Christmas Presents and the Endowment Effect, 232 JAHRBÜCHER FÜR NATIONALÖKONOMIE UND STATISTIK 4 (2012).
(815.) Richard P. Larrick, Richard E. Nisbett & James N. Morgan, Who Uses the Normative Rules of Choice? Implications for the Normative Status of Microeconomic Theory, 56 ORG. BEHAV. & HUM. DECISION PROCESSES 331 (1993).
(816.) Itamar Simonson & Barry M. Staw, Deescalation Strategies: A Comparison of Techniques for Reducing Commitment to Losing Courses of Action, 77 J. APPLIED PSYCHOL. 419 (1992); Itamar Simonson & Peter Nye, The Effect of Accountability on Susceptibility to Decision Errors, 51 ORG. BEHAV. & HUM. DECISION PROCESSES 416 (1992).
(817.) See, e.g., Ward Farnsworth, The Legal Regulation of Self-Serving Bias, 37 U.C. DAVIS L. REV. 567, 581–83 (2003) (questioning the external validity of the findings of Babcock, Loewenstein & Issacharoff, supra note 801).
(818.) See infra pp. 177–85.