Tainted: How Philosophy of Science Can Expose Bad Science

Kristin Shrader-Frechette

Print publication date: 2014

Print ISBN-13: 9780199396412

Published to Oxford Scholarship Online: October 2014

DOI: 10.1093/acprof:oso/9780199396412.001.0001


Understanding Uncertainty

False Negatives in Quantitative Risk Analysis

Chapter: Chapter 14 Understanding Uncertainty
Source: Tainted
Author(s): Kristin Shrader-Frechette
Publisher: Oxford University Press
DOI: 10.1093/acprof:oso/9780199396412.003.0014

Abstract and Keywords

The chapter assesses another typical scientific value judgment, minimizing false positives (false assertions of an effect), not false negatives (false denials of an effect), when both cannot be minimized, in situations of uncertainty. Contrary to standard scientific opinion, the chapter argues that in welfare-related, uncertain science, minimizing false negatives is scientifically and ethically superior.

Keywords:   false positive, false negative, quantitative risk analysis, uncertainty, value judgment

Are cell phones dangerous? The World Health Organization and International Agency for Research on Cancer say it is likely. They classify wireless radiofrequency-electromagnetic fields as possibly carcinogenic to humans because of increased brain cancers.1 Many neuro-oncologists likewise say they have confirmed a linear relationship between cell-phone usage and brain-tumor incidence, and therefore people should limit cell-phone exposure.2 Some epidemiologists, however, say that despite insufficient data, results do not suggest excess brain tumors from mobile phones.3 Similar controversies beset fracking—hydraulic and chemical fracturing of shale so as to extract natural gas. Scientists at the International Energy Agency say fracking can be done safely; independent, university-based scientists and physicians say that it cannot, that it contaminates groundwater and degrades air quality. Indeed, nations like France have suspended all fracking.4

How should scientists respond, if harms like cell-phone carcinogenicity are uncertain? In situations of scientific uncertainty, what is the most defensible value judgment or default rule—the rule specifying who has the burden of proof and who should be assumed correct, in the absence of arguments to the contrary? If the previous chapter is correct, in situations involving uncertainty, potential societal catastrophe, and no overarching benefits, scientists ought to use the default rule of assessing their data in terms of maximin, not expected-utility rules. Using maximin would require scientists evaluating cell phones to protect those who are most sensitive to electromagnetic radiation, like children—at least in default cases where there are no compelling arguments to the contrary. Yet because it is more expensive for commercial interests to protect extremely sensitive populations, using this default rule might increase market costs; not using it, and therefore protecting children less, might decrease them.5
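To make the contrast between these two default rules concrete, the following sketch compares them on a single hypothetical choice; all payoffs, probabilities, and option names are invented for illustration, not drawn from the cell-phone case.

```python
# A minimal sketch, with purely hypothetical payoffs and probabilities, of how
# expected-utility and maximin default rules can rank the same options differently.

options = {
    "deploy without precautions": {"no harm": 10, "catastrophe": -1000},
    "deploy with strict limits": {"no harm": 6, "catastrophe": -50},
}

# Assumed probabilities; under genuine uncertainty these numbers are themselves contested.
assumed_probabilities = {"no harm": 0.999, "catastrophe": 0.001}

def expected_utility(outcomes):
    """Probability-weighted average payoff (the standard default rule)."""
    return sum(assumed_probabilities[state] * value for state, value in outcomes.items())

def worst_case(outcomes):
    """Worst possible payoff; maximin picks the option whose worst case is least bad."""
    return min(outcomes.values())

for name, outcomes in options.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.2f}, "
          f"worst case = {worst_case(outcomes)}")

# With these numbers, expected utility favors deploying without precautions
# (8.99 vs. 5.94), while maximin favors the more protective option (-50 vs. -1000).
```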

Chapter Overview

This chapter argues that scientists facing situations having 5 characteristics—(1) uncertainty about harm probabilities and consequences, (2) potentially catastrophic societal risks, (3) absence of overarching benefits, (4) the impossibility of avoiding both false positives and false negatives, and (5) the absence of compelling arguments to the contrary—should not follow the traditional scientific value judgment/default rule of minimizing false positives, false assertions of harmful effects. Instead, under conditions (1)–(5), scientists should follow the default rule of minimizing false negatives, false assertions of no harm. The chapter first reviews these 2 types of error, then shows how they are analogous, respectively, to default rules in civil and criminal law. Third, it makes the case for using the default rule of minimizing false negatives in situations involving conditions (1)–(5). Fourth, it answers objections to these arguments.

False-Positive and False-Negative Errors

When science includes legitimate and unresolved disagreement, scientists face uncertainty. For instance, some scientists say nano-particle-containing sunscreens are dangerous, while others say they are not.6 Some scientists say oral contraceptives are dangerous,7 while others say they are not.8 In such uncertain situations, false positives (type-I errors) occur when scientists reject a true null hypothesis, for example, “progestin-estradiol oral contraceptives have no increased-ovarian-cancer effects.” False negatives (type-II errors) occur when one fails to reject a false null hypothesis. Yet under conditions of scientific uncertainty, often it is statistically impossible for scientists to minimize both false positives and false negatives. Instead, they must make a value judgment—or use a default rule—about which type of error to minimize and about a testing pattern for their hypothesis. Typically they define the concept of statistical significance (see chapter 8) in terms of a false-positive risk of 0.01 or 0.05. That is, there is not more than a 1 in 100, or a 5 in 100, chance of committing a false-positive error.
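A brief numerical sketch can make the trade-off explicit. It assumes a one-sided z-test with a hypothetical effect size, standard deviation, and sample size; none of these numbers comes from the studies cited above. With the sample size fixed, tightening the false-positive rate alpha necessarily loosens the false-negative rate beta.

```python
# Sketch of the type-I / type-II trade-off for a one-sided z-test of a mean.
# Effect size, sigma, and sample size are hypothetical, chosen only to show
# that, with n fixed, lowering alpha raises beta.

from scipy.stats import norm

delta = 0.5   # assumed true effect under the alternative hypothesis
sigma = 2.0   # assumed known standard deviation
n = 30        # fixed sample size

noncentrality = delta * n**0.5 / sigma  # mean of the test statistic if the effect is real

for alpha in (0.10, 0.05, 0.01):
    critical_z = norm.ppf(1 - alpha)             # rejection threshold under the null
    beta = norm.cdf(critical_z - noncentrality)  # P(fail to reject | effect is real)
    print(f"alpha = {alpha:.2f}  ->  beta = {beta:.2f}  (power = {1 - beta:.2f})")

# Output (approximate): alpha 0.10 -> beta 0.47; alpha 0.05 -> beta 0.61; alpha 0.01 -> beta 0.83.
```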

Which error is more serious, false negatives or false positives? In law, an analogous issue is whether it is more serious to acquit a guilty person or to convict an innocent person. Should scientists run the risk of not recommending some scientific product that is really safe, or of recommending some scientific product—like cell phones—that is unsafe and could harm people? Decreasing commercial risks, by minimizing false positives, might hurt public health. Yet decreasing public risk, by minimizing false negatives, might hurt commercial profits.9

(p.198) Scientific Preferences for Minimizing False Positives

Most scientists have done hypothesis-testing so as to minimize false positives, to limit incorrect rejections of the no-effect hypothesis. To do so, they typically design experimental studies to guard against confounders, alternative possible causes. Thus, as chapter 8 illustrated, they demand replication of study results and often test for statistical significance. They do so because the scientific enterprise needs skepticism, rigorous reluctance to accept positive results. Otherwise, false claims would be accepted as science. As Abraham Kaplan put it, “The scientist usually attaches a greater loss to accepting a falsehood than to failing to acknowledge a truth. As a result, there is a certain conservatism or inertia in the scientific enterprise.”10

When both types of error cannot be avoided, apparent scientific preferences for false-negative or public risks also might arise because, as the previous chapter noted, most science is done by special interests that hope to profit from it. Special-interest scientists often underestimate product and pollution risks,11 partly because it is difficult to identify all hazards, partly because they assume unidentified risks are zero, and partly because their employers want to claim their products and pollutants are safe. As chapter 13 suggested, reliance on the default rule of minimizing commercial risk also arises because scientific experts almost always use expected-utility decision rules, regardless of whether the situation meets the 3 criteria where maximin rules appear more appropriate.

Scientists’ preferences for the default rule of risking false-negative or public risks and for minimizing false-positive or commercial risks—when both cannot be avoided—are also consistent with standards of proof required in criminal cases, as opposed to cases in torts. Because US law requires juries in criminal cases to be sure beyond a reasonable doubt that a defendant is guilty before deciding against him, criminal standards of proof reveal preferences for false negatives, for innocence verdicts, for risking acquitting guilty people. In cases in torts, however, because US law requires juries to believe only that it is more probable than not that defendants are liable, standards of proof—default rules—in civil cases reveal no preference for false negatives or false positives. Why the difference?

Judith Jarvis Thomson says that “in a criminal case, the society’s potential mistake-loss is very much greater than the society’s potential omission-loss.”12 That is, consequences to criminal defendants could include mistakes like execution. Nations also protect their moral legitimacy by minimizing false-positive verdicts in criminal cases. If they fail to convict the guilty, they commit a more passive, less reprehensible, wrong than if they convict the innocent. Thus, if standards of proof in cases of commercial or false-positive risks were analogous to those in criminal cases, society should minimize commercial and not public risk. Later paragraphs argue, however, that uncertain, potentially catastrophic scientific situations are disanalogous both to hypothesis-testing in pure science and to determining criminal guilt. Why?

Researchers doing pure science prefer the default to minimize false positives because it seems more scientifically conservative. It avoids positing an effect, for instance, that a product causes cancer. Instead, it presupposes the null hypothesis is correct, for instance, that the product causes no cancer. In pure/basic science—without welfare consequences—it thus seems reasonable to claim that one should maximize truth and avoid false positives. However, chapter 13 argued that societal decisionmaking under uncertainty is disanalogous to pure-science decisionmaking because it also requires taking account of processes for recognizing ethical/legal obligations. When one moves from basic to policy-relevant science, what is rational moves from epistemological to both ethical and epistemological considerations. Thus, the default of minimizing false positives in basic science provides no rationale for doing so in policy-relevant science.

Civil law likewise exhibits no preference for minimizing false positives, although criminal law does; civil law instead protects the more vulnerable party. In policy-relevant-science cases, however, the public is more vulnerable than commercial interests because many could die from dangerous pollutants/products. Yet if scientists err in assuming some product/process is harmful when it is not, the main losses are economic, not human lives. Therefore, if the aim of societal decisionmaking is to avoid more serious harms, cases of policy-relevant science are not analogous to criminal cases. Why not?

In policy-related science, the greater threats are to the public and come from false negatives, whereas in criminal cases the greater threats are to defendants and come from false positives. Members of the public (as compared to producers/polluters) often have less information about the risks of welfare-affecting science, fewer financial resources to use in avoiding them, and greater difficulty exercising due-process rights after being harmed, because they bear the burden of proof.13 If so, welfare-related science requires protecting the public, the more vulnerable party, by using the default of minimizing false negatives, false assertions of no harm, when scientists face situations having 5 characteristics—(1) uncertainty about harm probabilities and consequences, (2) potentially catastrophic societal harm, (3) absence of overarching benefits, (4) unavoidability of both types of error, and (5) absence of compelling arguments to the contrary. Characteristics (4) and (5) merely state some of the conditions for defining something as a default rule, and subsequent sections defend characteristics (1)–(3) as situations that, together, argue for using the default rule of minimizing false negatives.

Minimizing False-Negative Risks in Practical Science

Obviously the decision whether to minimize false negatives or false positives must partly be made on a case-by-case basis. Therefore, this chapter argues for prima facie grounds for reducing public risk, false negatives, under conditions (1)–(5) above. That is, it argues that the person seeking to reduce commercial risks or false positives bears the burden of proof for accepting potentially catastrophic, uncertain, low-benefit, societal risks. These arguments show both that public risks are the kinds most deserving of reduction, and that members of the public (who typically choose to minimize public risks) should be the main locus of decisionmaking regarding potentially catastrophic risks.

One reason to minimize false negatives in potentially catastrophic cases is that it is more important to protect the public from harm than to provide some societal benefit. Why? Protecting from harm is a necessary condition for enjoying other freedoms.14 Ethicist Jeremy Bentham, for instance, discussing liberalism, cautioned—as ethicist Robert Nozick and others might—that “the care of providing for his enjoyments ought to be left almost entirely to each individual; the principal function of government being to protect him from sufferings.”15 Although sometimes people cannot easily distinguish between providing benefits and protecting from harm, there is a general distinction between welfare and negative rights;16 between welfare laws that provide benefits and protective laws that prevent harms; between letting die and killing; and between acts of omission and commission.17 Given such distinctions, because protecting people from harm is more important than providing them some good, protecting people from dangerous products is more important than providing risky products.

Another reason for minimizing false negatives in potentially catastrophic cases of practical science is that doing so protects the innocent public, whereas minimizing false positives would protect mainly those trying to profit from risky products. Because industrial producers, users, and implementers of science and technology—not the public—receive the bulk of science-and-technology benefits, they and not the public deserve to bear most associated risks.18

In addition, the public typically needs more risk protection than commercial interests. Laypeople usually have fewer financial resources and less information to cope with societal risks, especially because special interests may deny that risks to the public exist.19 A typical situation occurred in Japan, where the dangers of mercury poisoning were identified in 1940, and deaths were reported in 1948. Yet the infamous Minamata poisoning occurred in 1953. Because of commercial/government indifference and repeated official denials of harm, government took action against mercury contamination only in the 1960s, 25 years after the first deaths occurred. Such cases, as well as economic and government financial incentives for ignoring public risks, suggest the public has greater need of protection.20

Minimizing public risks also seems reasonable because citizens have legal rights to protection against scientific-commercial decisions that could threaten their welfare, whereas risk imposers have no rights to profits from any products/pollutants. Citizens’ rights to such protection are especially important because many potentially lethal risks typically are not fully compensated. Instead, consumers usually have 3 options regarding risks: prevention, transferal of loss, and risk retention. When citizens protect themselves by maintaining enough assets to rectify damages, they employ risk retention. When they use mechanisms like insurance and legal liability to transfer the risk, they protect themselves by passing most of the loss to the insurer or liable party. Risk transfer obviously is more practically desirable than retention because it does not require people to retain idle, unproductive assets to guard against damage. The moral desirability of risk transfer is that, if special interests harm someone, they and not the victim should be liable. The ethical responsibility of special interests thus provides grounds for removing financial responsibility from the victim. Insurance is a better vehicle for risk transfer than liability because insurance typically does not require victims to use costly, lengthy, legal remedies to obtain protection or compensation.21
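The practical contrast between retention and transfer can be shown with a small sketch; the loss amount, loss probability, and insurer markup below are hypothetical assumptions, not figures from the text.

```python
# Sketch of why risk transfer ties up far less capital than risk retention.
# All monetary figures and the loss probability are hypothetical.

potential_loss = 500_000   # worst-case damage a consumer might have to cover
loss_probability = 0.001   # assumed annual chance of suffering that loss
load_factor = 1.5          # assumed insurer markup over the expected loss

# Retention: keep enough idle assets on hand to rectify the worst-case damage.
retained_assets = potential_loss

# Transfer: pay a premium roughly proportional to the expected loss.
expected_loss = loss_probability * potential_loss
premium = load_factor * expected_loss

print(f"Capital tied up under retention: ${retained_assets:,.0f}")
print(f"Annual premium under transfer:   ${premium:,.0f}")
# With these assumptions, transfer frees roughly $499,250 that retention would
# leave idle, which is the practical advantage described above.
```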

However, the most ethical way for innocent victims to protect themselves against science-related societal risks is prevention because it ties up no victim assets. Thus, in cases where those responsible cannot compensate harms they do to others, ethics requires those risks to be eliminated, especially if potential victims do not give free, informed consent to them. Judith Jarvis Thomson describes incompensable harms as those so serious that no money could compensate the victim. By this definition, death obviously is an incompensable harm. “However fair and efficient the judicial system may be,” because those who cause incompensable harms by their negligence “cannot square accounts with their victims,”22 they commit an ethically unjustifiable harm. But when are risks unjustifiable?

Of course, borderline cases are controversial, because technologies like nuclear energy or industrial-chemical carcinogens impose significant, potentially catastrophic, incompensable risks on others. Nevertheless, scientists assessing such cases should minimize false negatives. For instance, as earlier chapters noted, the US government admits that a nuclear accident could kill 150,000 people and that the core-melt probability for all existing/planned US commercial reactors is 1 in 4 during their lifetimes. Even worse, US citizens are prohibited by law from obtaining compensation from negligent nuclear utilities for more than 1–2 percent of losses from a worst-case, commercial-nuclear catastrophe. In most of the world, citizens have no commercial-nuclear-insurance protection at all. Because commercial-nuclear risks are both uncompensated and rejected by a majority of people in all nations except North Korea, they appear ethically indefensible. If so, they should be prevented, including through scientists’ minimizing false negatives in their assessments.23
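To see how a fleet-wide figure of that kind arises from per-reactor estimates, the sketch below aggregates a per-reactor-year probability over many reactors and years. The per-reactor-year probability, fleet size, and lifetime are hypothetical inputs chosen so the output lands near the cited 1-in-4 figure; they are not official estimates.

```python
# Illustration of aggregating a per-reactor core-melt probability into a
# fleet-wide, lifetime probability. Inputs are hypothetical; only the
# aggregation arithmetic is the point.

per_reactor_year_probability = 7e-5  # assumed chance of a core melt per reactor-year
reactors = 100                       # assumed fleet size
lifetime_years = 40                  # assumed operating lifetime per reactor

reactor_years = reactors * lifetime_years

# Probability that at least one core melt occurs somewhere in the fleet,
# treating reactor-years as independent trials.
fleet_probability = 1 - (1 - per_reactor_year_probability) ** reactor_years

print(f"Fleet-wide probability of at least one core melt: {fleet_probability:.2f}")
# With these assumed inputs the result is about 0.24, i.e., on the order of
# the 1-in-4 lifetime figure the chapter cites.
```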

Another reason for minimizing false negatives in practical, potentially catastrophic, welfare-related science is that many uncertain risks, for instance, million-year-hazardous-waste disposal, impose involuntary, uncompensated, thus unjustifiable harm. Besides, as chapter 13 argued, harm is justifiable only when it leads to greater good for all, including those most disadvantaged by it. If there is uncertainty about some science-related harm, obviously one could not show that imposing it would lead to greater good for all. Hence one could not justify imposing it.

Still another reason for minimizing false negatives in such circumstances is that doing so is often necessary to counter special-interest science, discussed earlier.24 Chapter 7 revealed false-negative biases in all pesticide-manufacturer studies of chemical risks, submitted to the Environmental Protection Agency (EPA) for regulatory use. Yet such false-negative biases, especially small sample sizes, occur frequently in pharmaceutical, medical-devices, energy, pollution-related, and other commercial scientific research because, as chapter 12 warned, most scientific work is funded by special interests. The result? Even when government decisions affect them, citizens often receive the best science money can buy, not truth. However, scientists can help counter this false-negative bias by minimizing false negatives in welfare-affecting science that has potentially catastrophic consequences.
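The sample-size point can be made concrete with a hypothetical cohort comparison; the incidence rates below are assumed for illustration, and a one-sided two-proportion z-test is used. A study with only a few hundred subjects per group will usually miss even a genuine doubling of disease incidence.

```python
# Sketch of how small sample sizes create false-negative bias: the probability of
# missing a real doubling of disease incidence (from 1% to 2%) at alpha = 0.05.
# The incidence rates are hypothetical.

from math import sqrt
from scipy.stats import norm

p0, p1 = 0.01, 0.02            # assumed control and exposed incidence
alpha = 0.05
z_alpha = norm.ppf(1 - alpha)  # one-sided test

for n in (100, 500, 1_000, 10_000):  # subjects per group
    pooled = (p0 + p1) / 2
    se_null = sqrt(2 * pooled * (1 - pooled) / n)         # SE if there is no effect
    se_alt = sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)  # SE if the effect is real
    # Probability the study fails to reject "no effect" even though p1 > p0:
    beta = norm.cdf((z_alpha * se_null - (p1 - p0)) / se_alt)
    print(f"n = {n:>6,} per group  ->  chance of a false negative = {beta:.2f}")

# Roughly: n = 100 -> 0.86, n = 500 -> 0.63, n = 1,000 -> 0.42, n = 10,000 -> 0.00.
```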

An Economics Objection

In response to the preceding arguments, suppose someone objects that scientists have duties, for the good of the economy, to minimize commercial risks and false positives, not public risks and false negatives, under conditions (1)–(5) outlined earlier.25 This objection fails because, as chapter 13 argues, it would require using human beings as means to the end of economic growth, rather than as ends in themselves. Yet tort law, the US Constitution,26 basic ethics rules, and rights to bodily security prohibit using persons as means to the ends of other persons.27 Moreover, as chapter 12 revealed, expert-calculated-risk probabilities do not provide reliable bases for pursuing economic efficiency because experts have as many biases in estimating harm probabilities as laypeople.28 If so, scientists ought to minimize false negatives and public risk in cases characterized by conditions (1)–(5).

Besides, as chapters 4 and 12 illustrate, citizens should be protected against false-negative biases in assessments of potentially catastrophic technologies that could cripple the economy. For instance, the major pro-nuclear lobby, the Atomic Industrial Forum, admits that commercial atomic energy could not have survived without protection from normal market mechanisms and accident-liability claims.29 Yet nuclear accidents can cripple the economy, as chapter 12 and Fukushima show. Massively undercapitalized and unable to withstand liability claims, atomic energy and such capital-intensive technologies can both threaten the economy and jeopardize investments in clean technologies like solar-photovoltaic and wind.30 Still another problem with arguments to maximize public risk and minimize commercial risk is their inconsistency. Often they sanction interfering with market mechanisms to protect special interests, yet they reject market-regulatory interference in order to protect the public. If so, minimizing false positives and commercial risk is questionable in situations characterized by (1)–(5).31

Other problems also face those who minimize false positives so as to protect the economy more than human beings. Their reasoning is as invalid as analogous arguments that abolishing slavery would destroy the economy of the South, or that giving equal rights to women would destroy the family, or that forcing automakers to put safety glass in windshields would bankrupt carmakers. Such arguments err because they pit cultural values—like family and economic well-being—against human rights to life and to equal rights, and they sanction discrimination. Judith Jarvis Thomson’s response to such arguments is that “it is morally indecent that anyone in a moderately well-off society should be faced with such a choice.”32 As chapter 13 argued, the only grounds justifying such discrimination against the people are that it will work to the advantage of everyone, including those discriminated against. Otherwise, such discrimination would amount to sanctioning the use of some humans as means to the ends of other humans.33 For all these reasons, in cases of uncertain, potentially catastrophic science having characteristics (1)–(5), the burden of proof should be on those attempting to put the public at risk by minimizing false positives.

The Public Should Accept or Reject Societal Risks

In cases of uncertain, potentially catastrophic, low-benefit science, the public and not scientists alone also should choose scientific value or default rules. Given democratic process, no one should impose risks on others without their free, informed consent. This dictum holds true in medical experimentation, and it applies to science and technology as well.34 Citizen and consumer sovereignty are justified by a revered ethical principle: No taxation without representation. As economist Tom Schelling notes, citizens have safeguarded this sovereignty by means of “arms, martyrdom, boycott,” the “inalienable right of the consumer to make his own mistakes.”35

A second reason for minimizing public risk and false negatives through citizen self-determination—not scientific paternalism that minimizes false positives—is its consistency with most ethical theories. In his classic discussion of liberty, John Stuart Mill argues that one ought to override individual decisionmaking only to protect others or keep them from selling themselves into slavery. Otherwise, says Mill, paternalism would be a dangerous infringement on individual autonomy.36 If so, paternalistic grounds never justify overriding public reluctance to accept uncertain, potentially catastrophic, scientific risks. In fact, arguments to minimize false positives and commercial risks, at the expense of the public, allege citizens are overprotective of themselves.37 Because such arguments sanction providing the public with less, not more, protection, largely for special-interest benefit, citizens need protection against such self-interested, ethically indefensible heavy-handedness.

Recall from chapter 3 that at least 50 percent of environmental-health scientists’ rights are violated by polluter harassment after they publish research that suggests the need for greater pollutant regulation.38 Innocent scientists with fully corroborated research thus face harassment, including violence, merely for speaking the truth.39 Lawrence Livermore’s Benjamin Santer had a dead rat left at his front door; University of Victoria’s Andrew Weaver had his computer stolen. Harvard University’s Mary Amdur lost her job, despite later being fully exonerated and having her findings corroborated.40

Similar private-interest biases in favor of minimizing false positives or commercial risk characterize the scientific-regulatory context, as when the US government tried to regulate tobacco, dioxins, benzene, and dozens of other risks. Based on robust science, the US Occupational Safety and Health Administration (OSHA) first limited workplace benzene exposure to 10 ppm, averaged over 8 hours, then tightened this limit to 1 ppm, partly because benzene has no safe dose. Yet after organizations such as the American Petroleum Institute filed petitions, the Supreme Court set aside benzene regulations, claiming OSHA did not show significant risk from benzene.41 The tobacco industry likewise fought scientists’ demonstration of the tobacco-cancer link by using special-interest science to deny it, create uncertainty, and claim controversy about scientific findings.42

Likewise, in the 1950s scientists showed dioxins caused chick-edema disease, killing millions of chickens in the Midwest. Later, scientists showed serious dioxin health harms from spraying Agent Orange in the Vietnam War in the 1960s. Yet Dow and Monsanto apparently used special-interest science to thwart dioxin regulations until 1994.43 And as soon as the US EPA proposed new standards in 1987 to protect people living near steel mills from lung cancer caused by coke-oven emissions, special-interest scientists criticized EPA recommendations and argued that the regulations “would weaken the [steel] industry” because they had “very high” costs. These special-interest-science claims were even more apparent when steel-industry scientists said the EPA was “unjustified” in proposing regulations that could save one person in 1000 from avoidable, premature cancer induced by steel-mill emissions. They wrote:

An increase of one cancer per 1000 residents . . . represents only a 2 percent increase in the cancer rate. This rate is too small to detect using epidemiology. Is a 2 percent or smaller increase in the lung cancer rate for the most exposed population worth all the effort? . . . The EPA approach is an arbitrary one . . . unjustified.44

This flawed reasoning of special-interest scientists presupposes that, in order to benefit steel manufacturers financially, it is acceptable to kill 1 person per 1000. Yet government typically regulates all involuntarily imposed risks that are higher than 1 fatality per million.45 This means special-interest scientists were trying to impose a steel-emissions risk on the public that was 3 orders of magnitude greater than those typically prohibited. Moreover, it is false for these scientists to claim that a 1-in-1000-cancer increase “is too small to detect using epidemiology.” It also is ethically question-begging for them to justify their false-negative biases and impose risks on innocent citizens in order to increase steel-industry profits.
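As a quick check of that comparison, using only the two figures already given in the text:

```python
# Arithmetic behind the comparison above: the risk the industry scientists
# defended versus the level government typically regulates.

defended_risk = 1 / 1_000              # 1 avoidable cancer per 1000 exposed residents
regulatory_threshold = 1 / 1_000_000   # involuntary risks above roughly this level are regulated

ratio = defended_risk / regulatory_threshold
print(f"Defended risk is {ratio:,.0f} times the usual regulatory threshold.")
# 1,000x is the "3 orders of magnitude" difference described above.
```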

To the preceding remarks, some scientists might object that, because the public wants the benefits associated with scientific risks like steel mills, nuclear power, sunscreens, and contraceptives, the public wants scientists to minimize false positives, false assertions of harms. However, extensive evidence shows that citizens who bear high levels of pollution risk do not consent to it and recognize that it reduces their welfare.46 If so, special-interest scientists may want to minimize false positives so as to impose their risks on unconsenting citizens.

A final reason for minimizing false negatives, in cases with characteristics (1)–(5), is that doing so is less likely to lead to socio-political unrest than minimizing false-positive risks. Otherwise, accidents, costly publicity, and civil disobedience occur. The long controversy over the Seabrook, New Hampshire, commercial-nuclear plant illustrates the costs of such civil unrest.47 Even if special-interest scientists were better judges of practical-science risks than laypeople,48 giving experts exclusive franchises for science-related decisionmaking would mean substituting short-term efficiency for long-term equal treatment, consent, and democracy.49 Such substitutions make no sense—if science is intended to promote social welfare and not merely private profits. As a US National Academy of Sciences’ committee notes, when one ignores public preferences for minimizing potentially catastrophic, uncertain societal risks, the costs always outweigh the benefits. In such cases, what is practical and prudent is also the most ethical: taking account of public preferences to minimize false negatives.50

Conclusion

What should scientists do in policy-relevant situations characterized by (1) uncertainty about harm probabilities and consequences, (2) potentially catastrophic societal harm, (3) absence of overarching benefits, (4) unavoidability of both types of error, and (5) no compelling arguments to the contrary? This chapter argued that in circumstances of (1)–(5), scientists should not follow the traditional scientific value judgment of minimizing false positives but instead should minimize false negatives, false assertions of no harm. That is, they should take account of ethical obligations, not merely scientific considerations. Moreover, because choosing default rules—to use in situations of scientific uncertainty—is a matter of values, the public and not scientists alone should help choose them. As Thomas Jefferson put it, “I know no safe depository of the ultimate powers of the society but the people . . . If we think them not enlightened enough to exercise their control . . . the remedy is not to take it from them, but to inform their discretion by education.”51

Notes:

(1) . IARC, IARC Classifies Radiofrequency Electromagnetic Fields as Possibly Carcinogenic (Lyon, France: International Agency for Research on Cancer, 2011), http://tinyurl.com/3sya7sy, accessed July 21, 2012. Earlier versions of some arguments here appeared in Kristin Shrader-Frechette, Risk and Rationality (Berkeley: University of California Press, 1993), 131–145; hereafter cited as Shrader-Frechette, RR.

(2) . S. Lehrer, S. Green, and R. Stock, “Association Between Number of Cell Phone Contracts and Brain Tumor Incidence in Nineteen U.S. States,” Journal of Neuro-oncology 101, no. 3 (2011): 505–507.

(3) . M. H. Repacholi, A. Lerchl, M. Röösli, Z. Sienkiewicz, A. Auvinen, J. Breckenkamp, G. d’Inzeo, P. Elliott, P. Frei, S. Heinrich, I. Lagroye, A. Lahkola, D. McCormick, S. Thomas, and P. Vecchia, “Systematic Review of Wireless Phone Use and Brain Cancer,” Bioelectromagnetics 33, no. 3 (April 2012): 187–206.

(4) . International Energy Agency, Golden Rules for a Golden Age of Natural Gas (Paris: IEA, 2012); Valerie Brown, “Industry Issues,” Environmental Health Perspectives 115, no. 2 (February 2007): A76.

(5) . L. Lave and B. Leonard, “Regulating Coke Oven Emissions,” in The Risk Assessment of Environmental and Human Health Hazards, ed. D. J. Paustenbach, 1064–1081 (New York: Wiley, 1989). See also C. Starr, R. Rudman, and C. Whipple, “Philosophical Basis for Risk Analysis,” Annual Review of Energy 1 (1976): 629–662.

(6) . Danger claims are in T. Smijs and S. Pavel, “Titanium Dioxide and Zinc Oxide Nanoparticles in Sunscreens,” Nanotechnology, Science and Applications 4, (2011): 95–112; D. Tran and R. Salmon, “Potential Photocarcinogenic Effects of Nanoparticle Sunscreens,” Australian Journal of Dermatology 52, no. 1 (February 2011): 1–6. Denials of danger claims are in M. Burnett and S. Wang, “Current Sunscreen Controversies,” Photodermatology, Photoimmunology, & Photomedicine 27, no. 2 (2011): 58–67. L. L. Lin, J. E. Grice, et al., “Time-Correlated Single Photon Counting for Simultaneous Monitoring of Zinc Oxide Nanoparticles,” Pharmaceutical Research 28, no. 11 (2011): 2920–2930.

(7) . E.g., M. Etminan, J. Delaney, B. Bressler, and J. Brophy, “Oral Contraceptives and the Risk of Gallbladder Disease,” Canadian Medical Association Journal 183, no. 8 (2011): 899–904; Ø. Lidegaard, E. Løkkegaard, A. Jensen, C. Skovlund, and N. Keiding, “Thrombotic Stroke and Myocardial Infarction with Hormonal Contraception,” New England Journal of Medicine 366 (2012): 2257–2266.

(8) . P. Hannaford, “Epidemiology of the Contraceptive Pill and Venous Thromboembolism,” Thrombosis Research 127, no. 3 (February 2011): S30–S34; E. Raymond, A. Burke, and E. Espey, “Combined Hormonal Contraceptives and Venous Thromboembolism,” Obstetrics and Gynecology 119, no. 5 (2012): 1039–1044.

(9) . C. W. Churchman, Theory of Experimental Inference (New York: Macmillan, 1947); See S. Axinn, “The Fallacy of the Single Risk,” Philosophy of Science 33, nos. 1–2 (1966): 154–162; J. Rossi, “The Prospects for Objectivity in Risk Assessment,” Journal of Value Inquiry 6, no. 2 (2012): 237–253.

(10) . A. Kaplan, The Conduct of Inquiry (San Francisco: Chandler, 1964), 253.

(11) . H. Leung and D. Paustenbach, “Assessing Health Risks in the Workplace,” in Paustenbach, Risk Assessment, 689–710 (regarding denying dioxin risks).

(12) . J. J. Thomson, Rights, Restitution, and Risk (Cambridge, MA: Harvard University Press, 1986).

(13) . P. Ricci and A. Henderson, “Fear, Fiat, and Fiasco,” in Phenotypic Variation in Populations, ed. A. Woodhead, M. Bender, and R. Leonard (New York: Plenum, 1988), 285–293. L. Cox and P. Ricci, “Risk, Uncertainty, and Causation,” in Paustenbach, Risk Assessment, 125–156. C. Cranor, Legally Poisoned (Cambridge, MA: Harvard University Press, 2011).

(14) . See, for example, H. Shue, “Exporting Hazards,” in Boundaries, ed. P. Brown and H. Shue (Totowa, NJ: Rowman and Littlefield, 1981), 107–145; J. Lichtenberg, “National Boundaries and Moral Boundaries,” in Brown and Shue, Boundaries, 79–100.

(15) . J. Bentham, Principles of the Civil Code, in The Works of Jeremy Bentham, ed. J. Bowring (New York: Russell and Russell, 1962), 1:301.

(16) . L. Becker, “Rights,” in Property, ed. L. Becker and K. Kipnis (Englewood Cliffs, NJ: Prentice-Hall, 1984), 76. See H. Stuart, “United Nations Convention on the Rights of Persons with Disabilities,” Current Opinion in Psychiatry 25, no. 5 (2012): 365–369.

(17) . J. Bentham, Principles of Morals and Legislation, in Bowring, Works, 1:36; J. Feinberg, Social Philosophy (Englewood Cliffs, NJ: Prentice-Hall, 1973), 29, 59; J. Rachels, “Euthanasia,” in Matters of Life and Death, ed. T. Regan (New York: Random House, 1980), 38.

(18) . L. Cox and P. Ricci, “Legal and Philosophical Aspects of Risk Analysis,” in Paustenbach, Risk Assessment, 22–26; hereafter cited as Cox and Ricci, LP. See also W. Hoffman and J. Fisher, “Corporate Responsibility,” in Becker and Kipnis, Property, 211–220.

(19) . For example, the lead industry blamed child illnesses from lead-paint poisoning on poor parental care. See D. Rosner and G. Markowitz, “A Problem of Slum Dwellings and Relatively Ignorant Parents,” Environmental Justice 1, no. 3 (December 2008): 159–168. Thomas McGarity and Wendy Wagner, Bending Science (Cambridge, MA: Harvard University Press, 2010). See David Michaels, Doubt Is Their Product (New York: Oxford University Press, 2008). N. Oreskes and E. Conway, Merchants of Doubt (New York: Bloomsbury Press, 2010). K. S. Shrader-Frechette, What Will Work (New York: Oxford University Press, 2011), ch. 4; hereafter cited as Shrader–Frechette, WWW.

(20) . Stephen John, “Security, Knowledge and Well-being,” Journal of Moral Philosophy 8, no. 1 (2011): 68–91.

(21) . See A. C. Michalos, Foundations of Decisionmaking (Ottawa: Canadian Library of Philosophy, 1987), 202ff.; and H. S. Denenberg, R. D. Eilers, G. W. Hoffman, C. A. Kline, J. J. Melone, and H. W. Snider, Risk and Insurance (Englewood Cliffs, NJ: Prentice-Hall, 1964). See also Cox and Ricci, LP, Risk Analysis, 1035. A. Zia and M. Glantz, “Risk Zones,” Journal of Comparative Policy Analysis 14, no. 2 (2012): 143–159.

(22) . Thomson, Rights, 158.

(23) . See K. S. Shrader-Frechette, Nuclear Power and Public Policy (Boston: Reidel, 1983), 74–78; hereafter cited as Shrader-Frechette, NP; Shrader-Frechette, WWW, ch. 4.

(24) . K. Shrader-Frechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007); hereafter cited as Shrader-Frechette, TASL; S. Krimsky, Science in the Private Interest (Lanham, MD: Rowman & Littlefield, 2003); Shrader-Frechette, WWW; Michaels, Doubt Is Their Product; Sharon Beder, Global Spin (Glasgow, UK: Green Books, 2002).

(25) . See previous chapter; Harsanyi, “Maximin Principle”; L. Maxim, “Problems Associated with the Use of Conservative Assumptions in Exposure and Risk Analysis,” in Paustenbach, Risk Assessment, 539–555; Lave and Leonard, “Coke Oven Emissions,” 1068–1069; S. Hoffmann, “Overcoming Barriers to Integrating Economic Analysis into Risk Assessment,” Risk Analysis 31, no. 9 (September 2011): 1345–1355; and S. Sgourev, “The Dynamics of Risk in Innovation,” Industrial and Corporate Change (July 2012): 1–27.

(26) . Shrader-Frechette, NP, 33–35; L. Wasserman, “Students’ Freedom from Excessive Force by Public School Officials?” Kansas Journal of Law and Public Policy 21 (2011): 35.

(27) . W. Frankena, “The Concept of Social Justice,” in Social Justice, ed. R. Brandt (Englewood Cliffs, NJ: Prentice-Hall, 1962), 10, 14; Shue, “Exporting Hazards”; and Lichtenberg, “National Boundaries.”

(28) . D. Eddy, “Probabilistic Reasoning in Clinical Medicine,” in Judgment under Uncertainty, ed. D. Kahneman, P. Slovic, and A. Tversky (Cambridge: Cambridge University Press, 1982), 267. See also the following articles in this collection: S. Oskamp, “Overconfidence in Case-Study Judgments,” 292; P. Slovic, B. Fischhoff, and S. Lichtenstein, “Facts vs. Fears,” 475. D. Kahneman and A. Tversky, eds., Choices, Values, and Frames (Cambridge: Cambridge University Press, 2000); D. Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

(29) . Shrader-Frechette, NP, chs. 1, 4. See P. Huber, “The Bhopalization of American Tort Law,” in Hazards, ed. Robert Kates, John Ahearne, Ronald Bayer, Ella Bingham, Victor Bond, Daniel Hoffman, Peter Huber, Roger Kasperson, John Klacsmann, and Chris Whipple (Washington, DC: National Academy Press, 1986), 94–95, 106–107; Shrader-Frechette, WWW, ch. 4.

(30) . See A. B. Lovins and J. H. Price, Non-Nuclear Futures (New York: Harper and Row, 1975); C. Flavin, Nuclear Power: The Market Test (Washington, DC: Worldwatch, 1983); Shrader-Frechette, WWW.

(31) . See Cooke, “Risk Assessment,” 345–347. Shrader-Frechette, WWW, ch. 4.

(32) . Thomson, Rights, 172.

(33) . See Frankena, “Concept of Social Justice,” 15; and Shrader-Frechette, RR, ch. 8. Heidi Li Feldman, “What’s Right About the Medical Model in Human Subjects Research Regulation,” Georgetown Law Faculty Publications and Other Works (2012): 1097.

(34) . See Shrader-Frechette, RR, chs. 2 and 3.

(35) . T. Schelling, Choice and Consequence (Cambridge, MA: Harvard University Press, 1984), 145–146. B. Leoni, Law, Liberty, and the Competitive Market (New Brunswick, NJ: Transaction Publishers, 2009); J. Aldred, The Skeptical Economist (Sterling, VA: Routledge, 2010), ch. 2.

(36) . See J. S. Mill, On Liberty (Buffalo, NY: Prometheus Books, 1986), esp. 16. Shrader-Frechette, RR, ch. 10; J. S. Purdy and N. Siegal, “The Liberty of Free Riders,” American Journal of Law and Medicine 38, no. 2-3 (2012): 374; R. Skipper, “Obesity,” Public Health Ethics 5, no. 2 (2012): 181–191; V. Devinatz, “Reevaluating US Company Paternalism,” Labor History, 53, no. 2 (2012): 299–304.

(37) . Cass Sunstein, Risk and Reason (New York: Cambridge University Press, 2002); Cass Sunstein, Laws of Fear (New York: Cambridge University Press, 2005); Kristin Shrader-Frechette, “Review of Sunstein, Laws of Fear,” Ethics and International Affairs 20, no.1 (2006): 123–125; Kristin Shrader-Frechette, “Review of Sunstein, Risk and Reason,” Ethics 114, no. 2 (January 2004): 376–380; Kristin Shrader-Frechette, “Review of Sunstein, Risk and Reason,” Quarterly Review of Biology (December 2003).

(38) . McGarity and Wagner, Bending Science; Michaels, Doubt Is Their Product.

(39) . Suzanne Goldenberg, “US Senate’s Top Climate Sceptic Accused of Waging ‘McCarthyite Witch-Hunt’,” The Guardian (March 1, 2010), www.guardian.co.uk/environment/2010/march/01/inhofe-climate-mccarthyite, accessed on February 3, 2012. See Raymond Bradley, Global Warming and Political Intimidation (Amherst: University of Massachusetts Press, 2011).

(40) . Shrader-Frechette, TASL; Cheryl Hogue, “Scientists are being INTIMIDATED AND HARASSED Because of Their Research,” Chemical and Engineering News 88, no. 23 (June 7, 2010): 31–32; Devra Davis, When Smoke Ran Like Water (New York: Basic, 2002); Beder, Global Spin, esp. 108.

(41) . Ralph H. Luken and Stephen G. Miller, “The Benefits and Costs of Regulating Benzene,” Journal of the Air Pollution Control Association 31, no. 12 (1981): 1254–1259.

(42) . K. Brownell and K. Warner, “The Perils of Ignoring History: Big Tobacco Played Dirty and Millions Died,” Milbank Quarterly 87, no. 1 (March 2009): 259–294.

(43) . R. Hites, “Dioxins,” Environmental Science and Technology 45, no. 1 (2011): 16–20.

(44) . Lave and Leonard, “Coke Oven Emissions,” 1068–1069.

(45) . Lave and Leonard, “Coke Oven Emissions,” 1071–1078.

(46) . See Shrader-Frechette, RR, ch. 5.

(47) . Brian Emmet, Linda Perron, and Paolo Ricci, “The Distribution of Environmental Quality,” in Environmental Assessment, ed. D. Burkhardt and W. Ittelson (New York: Plenum, 1978), 367–374; Shrader-Frechette, TASL; S. Vanderheiden, “Confronting Risks,” Environmental Politics 20, no. 5 (2011): 650–667; C. Engeman, L. Baumgartner, B. Carr, A. Fish, J. Meyerhofer, T. Satterfield, P. Holden, and B. Harthorn, “Governance Implications of Nanomaterials Companies’ Inconsistent Risk Perceptions and Safety Practices,” Journal of Nanoparticle Research 14, no. 3 (2012): 749.

(48) . C. Starr, “General Philosophy of Risk-Benefit Analysis,” in Energy and the Environment, ed. H. Ashley, R. Rudman, and C. Whipple (Elmsford, NY: Pergamon Press, 1976), 16. See Cox and Ricci, “Legal and Philosophical Aspects.”

(49) . Slovic, “Facts vs. Fears,” 488.

(50) . National Research Council, Understanding Risk (Washington, DC: National Academy Press, 1996), 133ff.

(51) . Thomas Jefferson, “Thomas Jefferson to William C. Jarvis, 1820,” The Writings of Thomas Jefferson, ed. Andrew Lipscomb and Albert Bergh (Washington, DC: Thomas Jefferson Memorial Association, 1903–1904), 15: 278.