Issue Spotting: Risk Regulation

How should risk assessments be made? What is the appropriate response to risk? How are risks communicated and mitigated?

In 2015, a World Health Organization (WHO) study announced findings that processed meats, such as bacon, should be classified alongside more infamous carcinogens like cigarettes. Following the announcement, WHO scrambled to dispel sensationalist headlines equating eating bacon with smoking cigarettes: the report found only that the evidence linking each to cancer was similarly strong, not that the magnitude of the two risks was the same. As Spiegelhalter explains in his primer on risk communication, processed meats have a consistent but minor effect on overall cancer rates compared to cigarettes.

A significant challenge and tension inherent to science policy is how to identify, assess, and appropriately manage the risks that arise in society. Here, we dive into several aspects of risk that complicate science policy, including defining risk itself, identifying the appropriate response to risks, and communicating them.

The first complication presented by risk assessment is the amorphous definition of risk itself. Risks are present in all aspects of life, but just as ubiquitous as the concept may seem are the myriad definitions used across agencies, industries, and the public. Generally speaking, however, most agencies and industries measure risk as the product of an unfortunate event’s likelihood of occurring and the severity of its consequence. Beyond this generic formula, though, the definitions of risk’s constituent parts are also open to debate. For instance, the measurements used in the formula can be defined qualitatively, as in the risk matrix below, or through quantitative methods that are themselves contested.
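To make the formula concrete, here is a minimal sketch of a qualitative risk matrix in Python. The three-level scales, score thresholds, and example events are illustrative assumptions, not any particular agency’s standard.

```python
# Minimal sketch of the generic formula: risk = likelihood x severity.
# The 3x3 qualitative scale and the bucket thresholds below are
# illustrative assumptions, not a standard from any agency.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Score an event on a 1-9 scale: likelihood times severity."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_class(score: int) -> str:
    """Bucket a score into the cells of a simple risk matrix."""
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# Example: a likely-but-minor event vs. a rare-but-severe one.
events = {
    "sunburn at the beach": ("likely", "minor"),
    "boating accident": ("rare", "severe"),
}
for name, (lik, sev) in events.items():
    s = risk_score(lik, sev)
    print(f"{name}: score {s} ({risk_class(s)} risk)")
```

Notice that a likely-but-minor event and a rare-but-severe one can land in the same cell of the matrix even though sensible responses to them differ, which is exactly where the definitional debates begin.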

In our personal lives, these differences may be of little consequence since we can accommodate and respond to our own perceptions of risk: you may pack sunscreen for the beach while another person brings a life jacket. From the perspective of science policy making, however, the consequences become much more dire, as someone must decide which risks the government is willing to accommodate for its citizens and how it chooses to do so.

As an example, consider the conflicting values and risk tolerances present in the recent push in healthcare for patients’ “right to try” medicines that have not yet completed the Food and Drug Administration’s approval process. Among the many factors in this tension are medical researchers’ uncertainty about the drugs they’ve produced and the willingness of some patients to risk side effects or other undesired consequences for a chance at improving their health. On both sides of the issue are people concerned with patient well-being, but with drastically different risk tolerances and appreciations of the costs associated with managing risk.

In democratic societies, a natural inclination would be to let the public decide the appropriate role of government in assessing and responding to the risks that arise in society. In many cases, this is exactly what plays out as we choose to elect (or not) representatives who create and support regulatory bodies such as the National Highway Traffic Safety Administration. Elsewhere, the public may also make this choice in more specific and local settings, such as when city zoning ordinances allocate space for hospitals, playgrounds, neighborhoods, and nuclear waste dumping grounds (I’ll have some buffer, please).

But how certain can we be of who knows best? Consider the following example presented by Supreme Court Justice Stephen Breyer in his 1993 book reviewing the challenges of risk regulation, “Breaking the Vicious Circle”. Breyer describes two studies conducted in 1987 that asked respondents to rank environmental risks to the country. One surveyed the general public; the other interviewed experts at the EPA. The results were a near inversion of each other. What gives?!
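One way to put a number on how far apart two sets of rankings sit is a rank correlation. The sketch below computes Spearman’s rho, which is +1 for identical rankings and -1 for a perfect inversion; the rankings themselves are hypothetical placeholders, not the actual 1987 survey results.

```python
# Hypothetical rankings of five risk sources (1 = most serious).
# These are invented placeholders for illustration, not the actual
# survey data Breyer describes.
public_rank = {"hazardous waste": 1, "air pollution": 2,
               "pesticides": 3, "indoor radon": 4, "climate change": 5}
expert_rank = {"hazardous waste": 5, "air pollution": 3,
               "pesticides": 4, "indoor radon": 2, "climate change": 1}

def spearman_rho(a: dict, b: dict) -> float:
    """Spearman's rank correlation for tie-free rankings:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(a)
    d2 = sum((a[k] - b[k]) ** 2 for k in a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(f"rho = {spearman_rho(public_rank, expert_rank):+.2f}")  # -0.90
```

A rho near -1, as in this toy example, is the quantitative signature of the near-inversion Breyer highlights.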

Other Tensions in Risk Policy

Justice Breyer’s example illustrates yet another challenge faced by policy makers and analysts when deciding whose risks, and which risks, ought to be considered when making sound science policy. On one hand, policy makers ought to reflect the beliefs of those they represent. On the other, what the public understands as a significant risk can be distorted by its lack of expertise in the myriad sources of risk that arise in society.

What’s more, behavioral researchers such as Nobel laureate Daniel Kahneman have articulated pervasive, predictable, and ultimately problematic heuristics, i.e., mental shortcuts for reaching conclusions, that the public uses to judge risks in their lives. For instance, some adults have a disproportionate fear of flying because their perceptions are biased by easily memorable events such as recent crashes abroad or the terrorist attacks of 9/11. Yet compared to far more prevalent, seemingly benign activities like climbing a ladder or simply driving a car, one is significantly safer in the air. This cognitive error, known as the availability heuristic, and the others recorded by Kahneman and fellow researchers can easily contribute to false assessments of risk in society.
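As a toy model of how the availability heuristic distorts judgment, the sketch below weights each hazard’s true relative frequency by a hypothetical “media salience” factor and renormalizes. All figures are invented placeholders, not real fatality statistics.

```python
# Toy model of the availability heuristic: perceived risk tracks
# salience-weighted recall rather than true frequency. All numbers
# are invented placeholders, not real statistics.

hazards = {
    # name: (relative true frequency, media salience multiplier)
    "plane crash":  (1,   50.0),
    "ladder fall":  (40,   1.0),
    "car accident": (200,  2.0),
}

total_true = sum(freq for freq, _ in hazards.values())
total_salient = sum(freq * sal for freq, sal in hazards.values())

for name, (freq, sal) in hazards.items():
    true_share = freq / total_true
    perceived_share = freq * sal / total_salient
    print(f"{name:12s} true {true_share:6.1%}  perceived {perceived_share:6.1%}")
```

Even in this tiny catalogue of hazards, the vivid, heavily covered event crowds out the mundane ones that dominate the true totals.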

For another example of tension in assessing and managing risks, consider how experts can be overcome by what’s known as “tunnel vision”. Here, certain risks are pursued so zealously that the incidental costs and consequences of mitigating them outweigh the risks being avoided in the first place. Breyer offers the example in “Breaking the Vicious Circle” of the Environmental Protection Agency’s efforts to remove harmful toxins from land used for public parks and recreation. In many cases, the cost of removal rises steeply as the amount of remaining toxin shrinks: the EPA could spend relatively few resources cleaning a site to eliminate 99% of the harmful toxins, yet an incredible amount of resources may be required to completely eradicate the remaining 1%. In such cases, there can be considerable debate over the appropriate degree of risk mitigation, considering not only the consequence and likelihood of risks but also their costs and other risk trade-offs.
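The sketch below captures this “last 1%” dynamic with a hypothetical cost curve in which removing a fraction f of the toxins costs proportionally to f / (1 - f). The functional form and the unit cost are assumptions chosen purely to illustrate diminishing returns, not a model the EPA actually uses.

```python
# Hypothetical cleanup cost model illustrating diminishing returns:
# cost grows without bound as the fraction removed approaches 100%.
# The functional form and unit cost are illustrative assumptions.

UNIT_COST = 1.0  # cost of the "easy" first half of the cleanup

def cleanup_cost(fraction_removed: float) -> float:
    """Cost to remove the given fraction of toxins (0 <= f < 1)."""
    return UNIT_COST * fraction_removed / (1.0 - fraction_removed)

for f in (0.50, 0.90, 0.99, 0.999):
    print(f"remove {f:6.1%} -> cost {cleanup_cost(f):8.1f}")
# remove 50.0% costs 1.0; remove 99.9% costs 999.0
```

Under this assumed curve, eradicating the final sliver of toxin costs orders of magnitude more than the rest of the cleanup combined, which is why trade-offs, and not just risk magnitudes, belong in the analysis.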

How does this play into issue spotting? As we’ve reviewed, numerous tensions arise throughout the risk identification, assessment, and management processes. While handling risks is of vital importance to individuals and society, these tensions ought to prompt key questions of any risk-focused policy. As you encounter or create such policies, consult the table above for questions that can help you identify these tensions.

References and Further Reading

  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
  • Spiegelhalter, D. (2017). Risk and uncertainty communication. Annual Review of Statistics and Its Application, 4, 31-60.
  • Breyer, S. (2009). Breaking the vicious circle: Toward effective risk regulation. Harvard University Press.
  • Hansson, S. O., & Aven, T. (2014). Is risk analysis scientific? Risk Analysis, 34(7), 1173-1183.
  • Stirling, A. (2007). Risk, precaution and science: Towards a more constructive policy debate. EMBO Reports, 8(4), 309-315.