Law Review at Washington University in St. Louis

Essay

Combating Arbitrary Jurisprudence by Addressing Anchoring Bias

By Michael Conklin


Abstract

Anchoring is a powerful behavioral bias that significantly affects decisions. Numerous studies have shown that juries and even judges are susceptible to its effects. This article contributes to the existing research by measuring how even subtle, irrelevant anchors in the legal system affect decisions, thus filling a gap in the literature. Additionally, the article proposes potential solutions to combat anchoring effects, with the goal of providing more equality in legal outcomes.

Introduction

Anchoring is a behavioral bias whereby information to which a person was previously exposed disproportionately affects a later decision. Anchoring does this by changing the point of reference that people use when making judgments. Once an anchor is set, future judgments are not based on a neutral examination of the evidence; rather, they are made in relation to the anchor.[1] Numerous studies have documented this phenomenon in juries and judges.[2] However, there is currently a gap in the literature as to whether subtle, irrelevant anchors also affect legal decisions.[3] This article discusses the results of a study designed to measure this effect, thereby filling that gap. Additionally, the article proposes potential solutions to combat anchoring effects, with the goal of providing more equality in legal outcomes.

I. Existing Research

The decisions that juries are called upon to make are often complex and nonstandard. For this reason, members of juries often rely on cognitive heuristics,[4] which frequently result in systematic biases, such as anchoring.[5] Numerous studies show that mock jury awards are highly influenced by anchoring.

One mock jury study varied the damages requested by the plaintiff’s lawyer while all other factors were held constant.[6] The requests were $10,000, $75,000, and $150,000.[7] Based on these requests, mock jurors awarded mean amounts of $18,000, $62,800, and $101,400, respectively.[8] A later 1996 study confirmed this same anchoring effect on personal injury awards.[9]

Unlike juries, judges routinely make legal decisions. However, this familiarity does not render them immune from anchoring effects.[10] For example, studies show criminal law judges are heavily influenced by probation officers’ sentencing recommendations.[11] One study administered to German judges found a drastic difference in sentencing outcomes depending on whether the hypothetical prosecutor suggested a two-month sentence or a thirty-four-month sentence.[12] The former resulted in an 18.78 month average while the latter resulted in a 28.70 month average.[13]

In all available published research on judge and jury anchoring bias, the anchor was blatant and related to the final determination. In the German sentencing study, for example, the prosecutor’s suggested sentence was generally related to the facts of the case. A prosecutor is unlikely to suggest two months in a first-degree murder case and similarly unlikely to suggest thirty-four months in a petty theft case.

However, research into nonlegal decisions suggests that anchoring bias is present even when the anchor is completely irrelevant to the final determination. One study had participants spin a wheel numbered zero to one hundred, which was rigged to land on either ten or sixty-five.[14] Participants were then asked if the percentage of African countries in the United Nations was greater than or less than that number.[15] Finally, they were asked to provide a percentage estimate as to how many African countries are in the United Nations.[16] Even though the participants knew the number from the wheel was irrelevant to the actual percentage of African countries in the United Nations, those who spun sixty-five gave a median estimate of 45% while those who spun ten gave a median estimate of 25%.[17]

In another non-law related study regarding subtle, irrelevant anchors, two groups were provided a description of a restaurant and asked how much they would be willing to pay to eat there.[18] The only independent variable was that one group was told the name of the restaurant was “Studio 17” and the other “Studio 97.”[19] The former group was willing to pay an average of $24.58, and the latter group $32.84.[20]

Furthermore, non-law related studies show that irrelevant anchors are effective even when they are blatantly absurd. Asking students if they thought a textbook would cost more or less than $7,128.53, followed by asking them how much the textbook would cost, caused higher estimates to be given.[21] Likewise, asking people if the temperature in San Francisco was more or less than 558 degrees, followed by asking them what they estimated the temperature to be, resulted in higher estimates.[22]

II. Study and Results

The research conducted for this article is the first to measure whether these subtle, irrelevant anchors also affect legal decision making. In the study, 101 undergraduate and graduate students at a regional university in the United States were surveyed. After excluding four unusable submissions,[23] the remaining ninety-seven were analyzed. The survey consisted of having participants read one of two different versions of a criminal case summary. The only difference between the two summaries was that one contained low numbers and the other high numbers. In the low version, the defendant was on First Street on March 2nd and was apprehended three minutes after the incident. In the high version, the defendant was on Eighty-First Street on March 31st and was apprehended forty-five minutes later.

The difference in apprehension time between the two scenarios was irrelevant to the outcome: in both scenarios, the defendant produced the stolen item, provided information only the assailant would know, and confessed to the crime. Therefore, the three variables that differed between the two summaries (date, street, and time to apprehension) are subtle and irrelevant to what sentence the defendant deserves.

After reading the case summary, participants were asked, “If you had complete discretion to sentence [defendant], how many months would you give him?” The results demonstrated a statistically significant increase in the average sentence from the low group to the high group.[24] The average sentence for the low group was 7.4 months, while the average sentence for the high group was 9.7 months. This result strongly suggests that even subtle, irrelevant anchors can have a significant effect on legal decision making.

A jury trial in the U.S. legal system is generally divided into two phases, the guilt-finding phase and the sentencing phase. While the jury provides the result for the guilt-finding phase, the judge generally provides the result for the sentencing phase. Therefore, a potential criticism of this study could be that the research subjects were not actual judges and that judges would likely not succumb to the anchoring bias. However, previous research illustrates that judges are not significantly better than laypeople at avoiding anchoring bias.[25] This is consistent with existing anchoring research that found other subject-matter experts, such as doctors, real estate appraisers, engineers, accountants, options traders, military leaders, and psychologists, were also susceptible to cognitive biases in their areas of expertise.[26]

III. Future Research

As previously illustrated, multiple studies have shown that juries in civil cases are heavily swayed by blatant, relevant anchors. Future research should examine whether subtle, irrelevant anchors affect not only a judge’s criminal sentencing decisions but also jury awards in civil cases. Anchoring in this context would be more difficult to measure because civil jury awards are generally in the thousands to millions of dollars; variations such as First Street versus Eighty-First Street are unlikely to have any effect because one and eighty-one are indistinguishably low compared to the amount of most civil awards. Future research into the potential effects of subtle, irrelevant anchors on civil jury awards would therefore have to implement creative case summaries with numbers large enough to be comparable to likely jury awards. For example, one summary could include the line, “In a town of over 100,000 people, Mr. Smith was singled out,” while the other states, “In a town of over 900,000 people, Mr. Smith was singled out.” If the case summary normally returned awards of around $500,000, adding this variable would allow researchers to measure whether mock jurors in the first, low group deviated down from $500,000 and those in the second, high group deviated up, as anchoring research suggests would occur.

IV. Proposed Solutions

The anchoring effect is so resilient that “it has proved to be almost impossible to reduce.”[27] However, there are two studies that suggest possible solutions. One found that the magnitude of the anchoring effect can be limited through the implementation of a procedural priming task.[28] The study had participants in one group find similarities between two images while participants in another group found differences.[29] Both groups were then given an anchoring test.[30] While both groups succumbed to the anchoring bias, participants who were given the procedural priming task of finding differences fared better than their counterparts who looked for similarities.[31] Additional studies showed that implementing a “consider-the-opposite” strategy, in which one actively generates reasons why the anchor is inappropriate, also minimized anchoring bias.[32]

These two studies suggest that if judges were to put themselves in a more critical mindset—possibly by listing reasons why an attorney’s sentencing suggestions are unreasonable—it would help them reach a more neutral decision, unbiased by extreme requests. Another option for reducing the anchoring effect on judges’ decisions would be to have them write down a preliminary sentencing decision before hearing suggestions from attorneys. This way they would be anchored more to this preliminary, neutral judgment than to the extreme requests of the attorneys.

A more drastic solution would be to forbid attorneys from making specific sentencing suggestions. For example, prosecutors would only be able to argue for a “high” or “significant” sentence based on the relevant factors instead of a specific term of incarceration. Under such a system, judges would focus on the underlying reasons for the attorney’s claim rather than be anchored to an arbitrarily high or low number.

The utilization by judges of a searchable database of cases and sentencing outcomes would also likely be beneficial. Such a database would allow judges to find the average sentence for similar cases and then deviate up or down from that fixed point based on the unique circumstances of their case. This would similarly result in judges being anchored to a neutral reference point.

Finally, increased judicial specialization may also help combat the anchoring problem because it would, in effect, give judges a more robust experience of similar cases. This would make judges less susceptible to extreme positions proposed by trial advocates because they would be well grounded in the outcomes of similar cases. In this way, it would provide benefits similar to those of the previously mentioned database proposal.

Any suggestion that judges should simply be instructed to disregard specific numerical requests from attorneys and instead render decisions based solely on the facts of the case is unlikely to be successful. While there are no studies that measure the ability of people to use willpower to disregard an anchor, the existing literature strongly suggests this is not possible. Even when people know the anchor is irrelevant, such as the result of a randomly spun wheel, or absurd, such as a $7,128.53 textbook, they are nevertheless unable to disregard the anchor and make an unbiased estimate. Therefore, it is unreasonable to assume that a judge would be able to use willpower to negate the anchors they face.

Conclusion

Trial outcomes are inevitably the result of many subjective factors. For instance, trial outcomes are affected by the biases that jurors and judges inevitably bring to the process. This study provides strong evidence of another factor that affects legal outcomes: anchoring bias. While the effect of anchoring bias on trial outcomes has been well documented, this study is the first to examine whether subtle, irrelevant anchors also affect trial outcomes. The affirmative finding helps show how powerful anchoring bias is. Consequently, combating the significant effects of anchoring is of utmost importance in order to guard against unequal treatment in the legal system.


[1].            Amos Tversky & Daniel Kahneman, Judgment under Uncertainty: Heuristics and Biases, 185 Science 1124, 1128 (1974).

[2].            See infra notes 5–13 and accompanying text.

[3].            A subtle, irrelevant anchor is one that is neither obvious nor related to the upcoming decision. For example, in a decision involving how much to offer for a used chair, discussing what the temperature is or how many points a basketball team scored would be subtle, irrelevant anchors. Discussing whether the chair was brand new or how much a similar one sold for would also be anchors, but not subtle and irrelevant ones because they are related to the value of the used chair.

[4].            A cognitive heuristic is a mental shortcut that allows one to make a decision without the time-consuming task of finding the optimal solution. Michael J. Seitz, Nikolai W. F. Bode & Gerta Köster, How Cognitive Heuristics Can Explain Social Interactions in Spatial Movement, 13 J. Royal Soc’y Interface 1 (2016). Using a cognitive heuristic is a common practice when one is faced with a complex cognitive task, but the practice often leads to systematic biases such as anchoring.

[5].            Gretchen B. Chapman & Brian H. Bornstein, The More You Ask for, the More You Get: Anchoring in Personal Injury Verdicts, 10 Applied Cognitive Psychol. 519, 521 (1996).

[6].            Allan Raitz, Edith Greene, Jane Goodman & Elizabeth F. Loftus, Determining Damages: The Influence of Expert Testimony on Jurors’ Decision Making, 14 L. & Hum. Behav. 385, 387 (1990) (citing J.J. Zuehl, The Ad Damnum, Jury Instructions, and Personal Injury Damage Awards (1982) (unpublished manuscript)).

[7].            Id.

[8].            Id.

[9].            Chapman & Bornstein, supra note 5, at 526.

[10].          Id. at 521.

[11].          See, e.g., id.

[12].          Birte Englich & Thomas Mussweiler, Sentencing Under Uncertainty: Anchoring Effects in the Courtroom, 31 J. Applied Soc. Psychol. 1535, 1538–40 (2001).

[13].          Id. at 1540.

[14].          Tversky & Kahneman, supra note 1, at 1128.

[15].          Id.

[16].          Id.

[17].          Id.

[18].          Clayton R. Critcher & Thomas Gilovich, Incidental Environmental Anchors, 21 J. Behav. Decision Making 241, 246–47 (2008).

[19].          Id.

[20].          Id. at 247.

[21].          Chris Guthrie, Jeffrey J. Rachlinski & Andrew J. Wistrich, Inside the Judicial Mind, 86 Cornell L. Rev. 777, 788 (2001) (citing Scott Plous, The Psychology of Judgment and Decision Making 146 (1993)).

[22].          Id. at 788–89 (citing Plous, supra note 21, at 146).

[23].          Two were excluded for giving multiple answers, one was excluded for being illegible, and one was excluded for demonstrating a clear misunderstanding of the question (the subject put “365” for how many months he/she would sentence the defendant).

[24].          A Wilcoxon Rank-Sum test with continuity correction was performed. The results indicate that the null hypothesis (that the two groups’ sentencing distributions are equal) can be rejected at the standard 5% significance level. In other words, if the different wording of the case summaries had no effect, results this disparate would arise by chance only about 2% of the time. The p-value of the Wilcoxon Rank-Sum test was 0.02003.
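For readers who wish to see the statistical procedure concretely, the test can be sketched in Python. The following is an illustrative implementation of a two-sided Wilcoxon Rank-Sum test with continuity correction using the normal approximation; the sample data are hypothetical and are not the study’s raw responses.

```python
import math

def average_ranks(values):
    """Assign 1-based ranks, giving tied values their average rank."""
    sorted_vals = sorted(values)
    rank_of = {}
    i = 0
    while i < len(sorted_vals):
        j = i
        while j < len(sorted_vals) and sorted_vals[j] == sorted_vals[i]:
            j += 1
        rank_of[sorted_vals[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    return [rank_of[v] for v in values]

def rank_sum_p(x, y):
    """Two-sided Wilcoxon Rank-Sum p-value with continuity correction,
    via the normal approximation (no tie-variance correction)."""
    n1, n2 = len(x), len(y)
    ranks = average_ranks(list(x) + list(y))
    w = sum(ranks[:n1])                    # rank sum of the first group
    mean = n1 * (n1 + n2 + 1) / 2          # expected rank sum under H0
    var = n1 * n2 * (n1 + n2 + 1) / 12     # variance of the rank sum
    z = max(0.0, abs(w - mean) - 0.5) / math.sqrt(var)  # continuity correction
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

if __name__ == "__main__":
    low = [6, 7, 8, 7, 9, 6]       # hypothetical "low anchor" sentences (months)
    high = [9, 10, 11, 10, 8, 12]  # hypothetical "high anchor" sentences
    print(f"p-value: {rank_sum_p(low, high):.4f}")
```

In practice, a library routine such as `scipy.stats.ranksums` would typically be used rather than a hand-rolled implementation; the sketch above only illustrates the mechanics of the test reported here.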

[25].          A study of 167 federal magistrate judges concluded that judges are “just as susceptible as other decision makers” when it comes to the anchoring bias. Guthrie et al., supra note 21, at 816. “Most studies that have compared the fact-finding and decision-making of judges to that of jurors have found no differences. . . . [including studies examining judges’ ability to] avoid taking cognitive shortcuts (heuristics) that lead to errors . . . .” Dawn McQuiston-Surrett & Michael J. Saks, The Testimony of Forensic Identification Science: What Expert Witnesses Say and What Factfinders Hear, 33 L. & Hum. Behav. 436, 440 (2009).

[26].          Guthrie et al., supra note 21, at 782–83.

[27].          Thomas Mussweiler, The Malleability of Anchoring Effects, 49 Experimental Psychol. 67, 71 (2002).

[28].          Id. at 69–70.

[29].          Id.

[30].          Id.

[31].          Id.

[32].          Thomas Mussweiler, Fritz Strack & Tim Pfeiffer, Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility, 26 Personality & Soc. Psychol. Bull. 1142 (2000).

06/16/2019 01:02 pm CDT