… Plus 6 Steps to Enhanced Assurance
The audit profession is facing unprecedented demands, but a host of tools is available to help. James Bone outlines the benefits of automating audit tasks.
Internal audit is under increasing pressure from many quarters: challenges to audit objectivity and ethical behavior, and requests to reduce or modify audit findings. “More than half of North American Chief Audit Executives (CAEs) said they had been directed to omit or modify an important audit finding at least once, and 49 percent said they had been directed not to perform audit work in high-risk areas.” That’s according to a report by The Institute of Internal Auditors (IIA) Research Foundation, based on a survey of 494 CAEs and some follow-up interviews.
Challenges to audit findings are a normal part of the process of clarifying the risks associated with internal control weaknesses and gaps that expose the organization to threats. However, reducing subjectivity and improving audit consistency are critical to minimizing second-guessing and enhancing credibility. One way to improve audit consistency and objectivity is to reframe the business case for audit automation.
Audit automation gives audit professionals the tools to reduce the time spent on low-risk, high-frequency areas. It also provides a means of detecting changes in those areas, monitoring the velocity of high-frequency risks that may lead to increased exposures or to the development of new risks.
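As a concrete illustration of detecting change in a high-frequency metric, a simple trailing-window control test can flag when a routine indicator suddenly deviates from its recent history. This is a minimal sketch, not anything prescribed by the article; the exception counts, the window length and the three-sigma threshold are invented for illustration.

```python
import statistics

def flag_outliers(series, window=12, z=3.0):
    """Flag points more than z standard deviations from the trailing-window mean."""
    flags = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.mean(history)
        sd = statistics.stdev(history)
        # Skip flat history (sd == 0) to avoid division-free false positives.
        if sd and abs(series[i] - mean) > z * sd:
            flags.append(i)
    return flags

# Hypothetical daily counts of a routine control exception; the spike on the
# last day (index 14) is flagged for follow-up.
daily = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6, 5, 4, 5, 6, 30]
print(flag_outliers(daily))  # → [14]
```

In practice such a check would run continuously against operational data feeds, which is exactly the kind of low-value, repetitive scanning that automation handles better than periodic manual sampling.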
More importantly, challenges to audit findings associated with low-frequency, high-impact (less common) risks typically involve areas of uncertainty that are harder to justify without objective data. Uncertainty, the realm of “unknown unknowns,” is the hardest risk to justify using a subjective, point-in-time audit methodology. Uncertainty, by definition, requires statistical and predictive methods that give auditors an understanding of the distribution of probabilities, as well as the correlations and degrees of confidence associated with a risk. Probability management provides auditors with next-level capabilities for discussing risks that are elusive to nail down. Automation gives internal auditors the tools to frame the discussion of uncertainty more clearly and to understand the context in which these events become more prevalent.
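To make the idea of a “distribution of probabilities” concrete, here is a minimal Monte Carlo sketch of an annual loss distribution. Everything in it is a hypothetical assumption rather than anything from the article: the Poisson event frequency, the lognormal severity parameters and the percentiles chosen are standard textbook choices used only to show the shape of the approach.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_annual_losses(n_years=20_000, freq_mean=4, sev_mu=10.0, sev_sigma=1.2):
    """Simulate total annual loss: Poisson event counts, lognormal severities.

    All parameters are illustrative assumptions, not calibrated figures.
    """
    counts = rng.poisson(freq_mean, n_years)           # events per simulated year
    return np.array([rng.lognormal(sev_mu, sev_sigma, c).sum() for c in counts])

losses = simulate_annual_losses()
p50, p95, p99 = np.percentile(losses, [50, 95, 99])
print(f"median annual loss: {p50:,.0f}")
print(f"95th percentile:    {p95:,.0f}")
print(f"99th percentile:    {p99:,.0f}")
```

Rather than a single point estimate, the auditor can now discuss a median outcome alongside tail percentiles, which is what makes conversations about uncertainty defensible with objective data.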
Risk communication is one of the biggest challenges for all oversight professionals. According to an article in Harvard Business Review,
“We tend to be overconfident about the accuracy of our forecasts and risk assessments and far too narrow in our assessment of the range of outcomes that may occur. Organizational biases also inhibit our ability to discuss risk and failure. In particular, teams facing uncertain conditions often engage in groupthink: Once a course of action has gathered support within a group, those not yet on board tend to suppress their objections — however valid — and fall in line.”
Everyone in the organization has a slightly different perception of risk that is influenced by heuristics developed over a lifetime of experience. Heuristics are mental shortcuts individuals use to make decisions. Most of the time, our heuristics work just fine with the familiar problems we face. Unfortunately, we do not recognize when our biases mislead us in judging more complex risks. In some cases, what appears to be lapses in ethical behavior may simply be normal human bias, which may lead to different perceptions of risk. How does internal audit overcome these challenges?
The Opportunity Cost of Not Automating
Technology is not a solution in and of itself; it is an enabler that makes staff more effective when integrated strategically to complement their strengths and address areas for improvement. Automation creates situational awareness of risks, and technology solutions that improve situational awareness in audit assurance should be the end goal. Situational awareness (SA) in audit is not a one-size-fits-all proposition: in some organizations, SA involves improved data analysis; in others, it may include a range of continuous monitoring and near-real-time reporting. Situational awareness reduces human error by making sense of the environment with objective data.
A growing body of research demonstrates that human error is the biggest source of risk in a wide range of organizations, from IT security to health care to organizational performance. Automation now makes it possible to reduce human error and to improve insight into operational performance. Chief Audit Executives have the opportunity to lead, in collaboration with operations, finance, compliance and risk management, on automation that supports each of the key stakeholders who provide assurance.
Collaboration on automation reduces redundant data requests, risk assessments, compliance reviews and demands on IT departments. Smart automation integrates oversight into operations, reduces human error, improves internal controls and creates situational awareness where risks need to be managed. Forgoing these benefits is the opportunity cost of not automating.
A Pathway to Enhanced Assurance
Audit automation has become a diverse set of solutions offered by a range of providers, but that alone should not drive the decision to automate. Developing a coherent strategy for automation is the key first step. Whether you are a Chief Audit Executive just beginning to consider automation or your team is well versed in automation platforms, it may be time to rethink audit automation not as a one-off budget item but as a strategic imperative, integrated into operations and focused on what the board and senior executives think is important. This requires the organization to see audit as integral to operational excellence and business intelligence. Reframing the role of audit through automation is the first step toward enhanced assurance.
Auditors are taught to be skeptical while conducting attestation engagements; however, there is no statistical definition of assurance. Assurance requires subjective judgments in the risk assessment process, which may lead to variability in audit quality among different people within the same audit function. According to ISACA’s IS Audit and Assurance Guideline 2202 Risk Assessment in Planning, Risk Assessment Methodology 2.2.4, “all risk assessment methodologies rely on subjective judgments at some point in the process (e.g., for assigning weights to the various parameters). Professionals should identify the subjective decisions required to use a particular methodology and consider whether these judgments can be made and validated to an appropriate level of accuracy.” Too often these judgments are difficult to validate with a repeatable level of accuracy without quantifiable data and methodology.
Scientific methods are the only proven way to develop degrees of confidence in risk assessments and to establish correlations between cause and effect. “In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.” One of the most practical ways to reduce sampling error is to automate data sampling, so that samples are larger and drawn continuously. Trending sampled data helps auditors detect seasonality and other factors that arise from the ebb and flow of business dynamics.
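The effect of sample size on sampling error can be shown with the standard margin-of-error formula for a proportion. The 4 percent exception rate and the two sample sizes below are hypothetical, chosen only to contrast a small manual sample with the larger samples automation makes feasible.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for an observed exception rate p_hat
    from a simple random sample of n items."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A 4% exception rate observed in a manual sample of 50 items versus an
# automated, near-population sample of 5,000 items.
for n in (50, 5_000):
    print(f"n={n:>5}: 4.0% +/- {margin_of_error(0.04, n):.1%}")
```

With 50 items the true exception rate could plausibly be anywhere from near zero to almost 10 percent; with 5,000 items the band narrows to roughly half a percentage point, which is the difference between a debatable finding and a defensible one.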
6 Steps to Enhanced Assurance
- Identify the greatest opportunities to automate routine audit processes.
- Prioritize automation projects each budget cycle in coordination with operations, risk management, IT and compliance as applicable.
- Prioritize projects that leverage data sources shared by multiple stakeholders (operational data used across functions); one-offs can be integrated over time as needed.
- Develop a secondary list of automation projects that allow for monitoring, business intelligence and confidentiality.
- Design automation projects with levels of security that maintain the integrity of the data based on users and sensitivity of the data.
- Consider the questions most important to senior executives.
“Look, I have got a rule,” General Powell said. “As an intelligence officer, your responsibility is to tell me what you know. Tell me what you don’t know. Then you’re allowed to tell me what you think. But you [should] always keep those three separated.”
– Tim Weiner, reporting in the New York Times on wisdom that former Director of National Intelligence Mike McConnell learned from General Colin Powell
The business case for audit automation has never been stronger given the demands on internal audit. Today, the tools are available to reduce waste, improve assurance, validate audit findings and provide for enhanced audit judgment on the risks that really matter to management and audit professionals.
James Bone is the author of Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind (Taylor & Francis, 2017) and is a contributing author for Compliance Week, Corporate Compliance Insights, and Life Science Compliance Updates. James is a lecturer at Columbia University’s School of Professional Studies in the Enterprise Risk Management program and consults on ERM practice.
He is the founder and president of Global Compliance Associates, LLC and Executive Director of TheGRCBlueBook. James founded Global Compliance Associates, LLC to create the first cognitive risk management advisory practice. James graduated from Drury University with a B.A. in Business Administration, from Boston University with an M.A. in Management and from Harvard University with an M.A. in Business Management, Finance and Risk Management.
Christopher P. Skroupa: What is the thesis of your book Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind and how does it fit in with recent events in cyber security?
James Bone: Cognitive Hack follows two rising narrative arcs in cyber warfare: the rise of the “hacker” as an industry and the “cyber paradox,” namely why billions spent on cyber security fail to make us safe. The backstory of the two narratives reveals a number of contradictions about cyber security, as well as how surprisingly simple it is for hackers to bypass defenses. The cyber battleground has shifted from an attack on hard assets to a much softer target: the human mind. If human behavior is the new and last “weakest link” in the cyber security armor, is it possible to build cognitive defenses at the intersection of human-machine interactions? The answer is yes, but the change that is needed requires a new way of thinking about security, data governance and strategy. The two arcs meet at the crossroads of data intelligence, deception and a reframing of security around cognitive strategies.
The purpose of Cognitive Hack is to look not only at the digital footprint left behind from cyber threats, but to go further—behind the scenes, so to speak—to understand the events leading up to the breach. Stories, like data, may not be exhaustive, but they do help to paint in the details left out. The challenge is finding new information buried just below the surface that might reveal a fresh perspective. The book explores recent events taken from today’s headlines to serve as the basis for providing context and insight into these two questions.
Skroupa: IoT has been highly scrutinized as having the potential to both increase technological efficiency and broaden our cyber vulnerabilities. Do you believe the risks outweigh the rewards? Why?
Bone: The recent Internet outage in October of this year is a perfect example of the power and stealth of IoT risks. What many are not aware of is that hackers have been experimenting with IoT attacks in increasingly complex and potentially damaging ways. The TOR Network, used in the Dark Web to provide legitimate and illegitimate users anonymity, was almost taken down by an IoT attack. Security researchers have warned of other examples of connected smart devices being used to launch DDoS attacks that have not garnered media attention. As the number of smart devices spreads, the threat only grows. The anonymous attacker in October is said to have used only 100,000 devices. Imagine what could be done with one billion devices as manufacturers globally export them, creating a new network of insecure connections with little to no security in place to detect, correct or prevent hackers launching attacks from anywhere in the world.
The question of weighing the risks versus the rewards is an appropriate one. Consider this: The federal government has standards for regulating the food we eat, the drugs we take, the cars we drive and a host of other consumer goods and services, but the single most important tool the world increasingly depends on has no gatekeeper to ensure that the products and services connected to the Internet don’t endanger national security or pose a risk to their users. At a minimum, manufacturers of IoT must put measures in place to detect these threats, disable IoT devices once an attack starts and communicate the risks of IoT more transparently. Lastly, the legal community has also not kept pace with the development of IoT; however, this is an area that will be ripe for class action lawsuits in the near future.
Skroupa: What emerging trends in cyber security can we anticipate from the increasing commonality of IoT?
Bone: Cyber crime has grown into a thriving black market complete with active buyers and sellers, independent contractors and major players who, collectively, have developed a mature economy of products, services, and shared skills, creating a dynamic laboratory of increasingly powerful cyber tools unimaginable before now. On the other side, cyber defense strategies have not kept pace even as costs continue to skyrocket amid asymmetric and opportunistic attacks. However, a few silver linings are starting to emerge around a cross-disciplinary science called Cognitive Security (CogSec), Intelligence and Security Informatics (ISI) programs, Deception Defense, and a framework of Cognitive Risk Management for cyber security.
On the other hand, the job description of “hacker” is evolving rapidly with some wearing “white hats,” some with “black hats” and still others with “grey hats.” Countries around the world are developing cyber talent with complex skills to build or break security defenses using easily shared custom tools.
The rise of the hacker community as an industry will have long-term ramifications for our economy and national security that deserve more attention; otherwise, the unintended consequences could be significant. In the same light, the book looks at the opportunity and challenge of building trust into networked systems. Building trust in networks is not a new concept, but it is too often a secondary or tertiary consideration as systems designers rush products and services to market to capture share, leaving security considerations to corporate buyers. IoT is a great example of this challenge.
Skroupa: Could you briefly describe the new Cognitive Risk Framework you’ve proposed in your book as a cyber security strategy?
Bone: First of all, this is the first cognitive risk framework of its kind designed for enterprise risk management. The Cognitive Risk Framework for Cybersecurity (CRFC) is an overarching risk framework that integrates technology and behavioral science to create novel approaches to internal controls design that act as countermeasures, lowering the risk of cognitive hacks. The framework targets cognitive hacks as a primary attack vector because of their high success rate and their volume relative to more conventional threats. The CRFC is a fundamental redesign of enterprise risk management and internal controls design for cybersecurity, but it is equally relevant for managing risks of any kind.
The concepts referenced in the CRFC are drawn from a large body of multidisciplinary research. Cognitive risk management is a sister discipline of a parallel body of science called cognitive informatics security, or CogSec. It is also important to point out that, as the creator of the CRFC, I have borrowed its principles and practices from cognitive informatics security, machine learning, artificial intelligence (AI) and behavioral and cognitive science, among other fields that are still evolving. The CRFC revolves around five pillars: Intentional Controls Design; Cognitive Informatics Security; Cognitive Risk Governance; Cybersecurity Intelligence and Active Defense Strategies; and Legal “Best Efforts” Considerations in Cyberspace.
Many organizations are doing some aspect of a “cogrisk” program but have not formulated a complete framework; others have not even considered the possibility; and still others are on the path toward a functioning framework sponsored by management. The CRFC supports this interim transition to a new level of business operations (cognitive computing), informed by better intelligence, to solve the problems that hinder growth.
Christopher P. Skroupa is the founder and CEO of Skytop Strategies, a global organizer of conferences.
When we think of hacking, we think of a network being hacked remotely by a computer nerd sitting in a bedroom, using code she’s written to steal personal data or money, or just to see whether it is possible. The image of a character breaking network security to take control of law enforcement systems has been imprinted on our psyche by TV crime shows; the real story, however, is both more complex and simpler in execution.
The idea behind a cognitive hack is simple. The term refers to the use of a computer or information system [social media, etc.] to launch a different kind of attack, one that relies for its effectiveness on the ability to “change human users’ perceptions and corresponding behaviors in order to be successful.” Robert Mueller’s indictment of 13 Russian operatives is a cognitive hack taken to the extreme, but it demonstrates the effectiveness and subtlety of an attack of this nature.
Mueller’s indictment describes an elaborately organized and surprisingly low-cost “troll farm” set up to launch an “information warfare” operation against U.S. political elections from Russian soil using social media platforms, which is both extraordinary and dangerous. The danger of these attacks is only now becoming clear, but it is also important to understand the simplicity of a cognitive hack. To be clear, the Russian attack is extraordinary in scope, purpose and effectiveness; however, such attacks happen every day for much more mundane purposes.
Most of us think of these attacks as email phishing campaigns designed to lure unsuspecting users into clicking a link that gives attackers access to their data. Russia’s attack is simply a more elaborate and audacious version, meant to influence what we think and how we vote, and to foment dissent between political parties and the citizenry of a country. That is what makes Mueller’s detailed indictment even more shocking. Consider, for example, how TV commercials, advertisers and, yes, politicians have been very effective at using “sound bites” to simplify their story and appeal to certain target markets. The art of persuasion is a simple way to explain a cognitive hack: an attack focused on the subconscious.
It is instructive to look at the Russian attack rationally from its [Russia’s] perspective in order to objectively consider how this threat can be deployed on a global scale. Instead of spending billions of dollars in a military arms race, countries can arm themselves with the ability to influence another country’s citizens for a few million dollars, simply through information warfare. A new, more advanced cadre of computer scientists is being groomed to build security for, and defend against, these sophisticated attacks. This is simply an old trick disguised in 21st-century technology through the use of the internet.
A playbook for hacking political campaigns has been refined and used effectively around the world, as documented in a March 2016 Bloomberg article. For more than 10 years, elections in Latin America have been a testing ground for how to hack an election. The drama in the U.S. reads like an episode of a long-running soap opera, complete with “hackers for hire,” middlemen, political conspiracy and interference by sovereign countries.
“Only amateurs attack machines; professionals target people.” – Bruce Schneier
Now that we know the rules have changed, what can be done about this form of cyber attack? Academics, government researchers and law enforcement have studied this problem for decades, but the general public is largely unaware of how pervasive the risk is and the threat it poses to our society and the next generation of internet users.
I wrote a book, Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind, to chronicle this risk, and I proposed a cognitive risk framework to bring awareness to the problem. Much more is needed to raise awareness among organizations, government officials and risk professionals around the world. A new cognitive risk framework is needed to better understand these threats, to identify and assess new variants of attack and to develop contingencies rapidly.
Social media has unwittingly become a platform of choice for nation-state hackers, who can easily hide the identity of the organizations and resources involved in these attacks. Social media platforms are largely unregulated and therefore are not required to verify the identity and source of funding behind these kinds of operations. This may change, given the stakes involved.
Just as banks and other financial services firms are required to identify new account owners and their sources of funding, technology providers may face similar obligations, since social media sites can also serve as a venue for raising and laundering illicit funds to carry out fraud or attacks on a sovereign state. We now have explicit evidence of the threat this poses to emerging and mature democracies alike.
Regulation alone cannot address an attack this complex, and existing training programs have proven ineffective. Traditional risk frameworks and security measures are not designed to deal with attacks of this nature. Fortunately, a handful of information security professionals are now considering how to implement new approaches to mitigate the risk of cognitive hacks. The National Institute of Standards and Technology (NIST) is also working on an expansive new training program for information security specialists, designed specifically around the human element of security; yet the public is largely on its own. The knowledge gap is huge, and the general public needs more than an easy-to-remember slogan.
A national debate among industry leaders is needed to tackle security. Silicon Valley and the tech industry writ large must also step up and play a leadership role in combating these attacks: forming self-regulatory consortiums to deal with the diversity and proliferation of cyber threats introduced through vulnerabilities in new technology launches, and developing more secure networking systems. The cost of cyber risk is rising far faster than inflation and will eventually become a drag on corporate earnings and national growth rates as well. Businesses must look beyond the “insider threat” model of security risk and reconsider how the work environment contributes to exposure to cyberattacks.
Cognitive risks require a new mental model for understanding “trust” on the internet. Organizations must begin to develop new trust measures for doing business over the internet and with business partners. The idea of security must also be expanded to include more advanced risk assessment methodologies along with a redesign of the human-computer interaction to mitigate cognitive hacks.
Cognitive hacks are asymmetric in nature, meaning that the downside of these attacks can significantly outweigh the benefits of risk-taking if they are not addressed in a timely manner. Because of this asymmetry, attackers seek the easiest route of access. Email is one example of a low-cost, highly effective attack vector that leverages the digital footprint we leave on the internet.
Imagine a sandy beach where you leave footprints as you walk but instead of the tide erasing your footprints they remain forever present with bits of data about you all along the way. Web accounts, free Wi-Fi networks, mobile phone apps, shopping websites, etc. create a digital profile that may be more public than you realize. Now consider how your employee’s behavior on the internet during work connects back to this digital footprint and you are starting to get an idea of how simple it is for hackers to breach a network.
A cognitive risk framework begins with an assessment of risk perceptions related to cyber risks at different levels of the firm. This risk perceptions assessment creates a Cognitive Map of the organization’s cyber awareness. This step is called Cognitive Governance and is the first of five pillars for managing asymmetric risks; the other four pillars are driven by the findings in the cognitive map.
A cognitive map uncovers the blind spots we all experience when a situation at work or on the internet exceeds our experience of how to deal with it successfully. Hackers use these natural blind spots to deceive us into changing our behavior: clicking a link, a video or a promotional ad, or even altering what we read. Trust, deception and blind spots are just a few of the concepts we must incorporate into a new toolkit called the cognitive risk framework.
There is little doubt that Mueller’s investigation into the sources and methods used by the Russians to influence the 2016 election will reveal more surprises, but one thing is no longer in doubt: the Russians have a new cognitive weapon that is deniable but still traceable, for now. They are learning from Mueller’s findings and will get better.
In 1981, Carl Landwehr observed that “without a precise definition of what security means and how a computer can behave, it is meaningless to ask whether a particular computer system is secure.”[i]
Researchers George Cybenko, Annarita Giani and Paul Thompson of Dartmouth College introduced the term “cognitive hack” in 2002 in an article entitled “Cognitive Hacking: A Battle for the Mind”: “The manipulation of perception —or cognitive hacking—is outside the domain of classical computer security, which focuses on the technology and network infrastructure.” This is why existing security practice is no longer effective at detecting, preventing or correcting security risks such as cyber attacks.
Almost 40 years after Landwehr’s warning, cognitive hacks have become the most common tactic of sophisticated hackers and advanced persistent threats. Cognitive hacks are the least understood of these tactics and operate below conscious awareness, allowing the attacks to occur in plain sight. To understand their simplicity, one need look no further than the evening news. The Russian attack on the presidential election is the best and most obvious example of how effective these attacks are. In fact, there is ample evidence that these attacks were refined over many years in the elections of emerging countries.
A March 16, 2016, article in Bloomberg, “How to Hack an Election,” chronicled how these tactics were used in Nicaragua, Panama, Honduras, El Salvador, Colombia, Mexico, Costa Rica, Guatemala and Venezuela long before they were used in the American elections.
“Cognitive hacking [Cybenko, Giani, Thompson, 2002] can be either covert, which includes the subtle manipulation of perceptions and the blatant use of misleading information, or overt, which includes defacing or spoofing legitimate norms of communication to influence the user.” Reports of an army of autonomous bots creating “fake news,” or at best misleading information, on social media and popular political websites are a classic signature of a cognitive hack.
Cognitive hacks are deceptive and highly effective because of a basic human bias to believe things that confirm our long-held beliefs, or the beliefs of our peer groups, whether social, political or collegial. Our perception is “weaponized” without our knowledge or full understanding that we are being manipulated. Cognitive hacks are most effective in a networked environment, where “fake news” can be picked up on social media as trending news or in “viral” campaigns, encouraging even more readers to be influenced without any sign that an attack has been orchestrated. In many cases, the viral spread of the news is itself a manipulation, carried out by an army of autonomous bots across social media sites.
At its core, the manipulation of behavior has been in use for years in marketing, advertising, political campaigns and times of war. During the World Wars, patriotic movies were produced to keep public spirits up and to encourage volunteers to join the military and fight. ISIS has been extremely effective at using cognitive hacks to lure an army of volunteers to its jihad, even in the face of the perils of war. We are more susceptible than we believe, which creates our vulnerability to cyber risks and allows the risk to grow unabated despite huge investments in security. Our lack of awareness of these threats, and the subtlety of the approach, make cognitive hacks the most troubling problem in security.
I wrote the book Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind to raise awareness of these threats. Security professionals must better understand how these attacks work and the new vulnerabilities they create for employees, business partners and organizations alike. More importantly, these threats are growing in sophistication and vary significantly, requiring security professionals to rethink the assurance provided by their existing defensive posture.
The sensitivity of the current investigations into political hacks by the House and Senate Intelligence Committees may prevent full disclosure of the methods and approaches used; however, recent news accounts leave little doubt about their effect, as described by researchers more than 14 years ago and seen more recently in elections in Paris and in Central and South America. New security approaches will require a much better understanding of human behavior, and collaboration from all stakeholders, to minimize the impact of cognitive hacks.
I proposed a simple set of approaches in my book, but security professionals must begin to educate themselves about this new, more pervasive threat and go beyond purely technological solutions to defend their organizations. If you are interested in research or other materials about these risks, or approaches to addressing them, please feel free to reach out.
[i] C.E. Landwehr, “Formal Models of Computer Security,” Computing Survey, vol. 13, no. 3, 1981, pp. 247-278.
I am extremely honored to have received an email from the Society for Risk Analysis after submitting my Cognitive Risk Framework for Cybersecurity and Enterprise Risk Management. I have been invited to present my research for peer review at the World Congress meeting in Cape Town, South Africa, next May.
This Guide explains fundamental aspects of human behaviour, which together constitute what the commercial maritime sector calls ‘the human element’.
It makes clear that the human element is neither peripheral nor optional in the pursuit of a profitable and safe shipping industry. On the contrary, the capabilities and vulnerabilities of human beings are – and always will be – at the centre of the enterprise.
The Guide clearly shows that managing the human element must take place simultaneously at all levels of the industry – from within the engine rooms and decks of the smallest cargo ships to the conventions of the regulation makers and the boardrooms of the business strategists. It is the policies and strategies that shape and constrain the space in which ships and their crews operate.
The 2016 Data Breach Investigations Report (DBIR), Verizon’s ninth annual report, revealed some grim news: the human threat vector is more dangerous than ever. The latest DBIR reaffirmed that employees continued to play a major role in many of the past year’s breaches. Some 63 percent of confirmed breaches involved weak, default, or stolen passwords. Worse, miscellaneous error (staff sending information to the wrong person) accounted for nearly 18 percent of breaches. Despite a wealth of preventive measures, employees remain one of the costliest vectors in data breaches and security incidents, which are increasing at an alarming rate.
Two environmental accidents in different parts of the world — along with media and public reaction to them — have dramatically illustrated some of the basic psychological principles of risk perception. In 2010, the Deepwater Horizon oil spill sent millions of gallons of oil into the Gulf of Mexico. In 2011, the Fukushima Daiichi nuclear power plant in Japan — damaged after a devastating earthquake and tsunami — leaked radiation into the atmosphere.
These incidents dominated news coverage for weeks and created widespread anxiety, even in people living miles away and not directly affected. For example, news that potassium iodide pills could help prevent radiation-induced thyroid cancer sparked a run on pharmacy supplies in the United States, thousands of miles away from the disaster, even when there was no evidence of increased radiation exposure.
Factors affecting perception
Risk perception is rarely entirely rational. Instead, people assess risks using a mixture of cognitive skills (weighing the evidence, using reasoning and logic to reach conclusions) and emotional appraisals (intuition or imagination). After reviewing the research, risk expert David Ropeik identified 14 specific factors that affect perception of danger:
Trust. When people trust the officials providing information about a particular risk — or the process used to assess risk — they tend to be less afraid than when they don’t trust the officials or the process.
Origin. People are less concerned about risks they incur themselves than the ones that others impose on them. This helps explain why people often get angry when they see someone talking on the cell phone while driving — and yet think nothing of doing so themselves.
Control. Perceived control over outcomes also matters. This helps explain why someone is not afraid of driving a car — even though automobile crashes kill thousands of people each year — but may be afraid of flying in an airplane.
Nature. Dangers in nature — such as sun exposure — are perceived as relatively benign, whereas man-made harms — nuclear power accidents or terror attacks — are more menacing.
Scope. Cataclysmic events, capable of killing many people at the same time, are scarier than chronic conditions — which may kill just as many people but over a longer period. That helps explain why a tsunami or earthquake feels scarier than heart disease or diabetes.
Awareness. Saturation media coverage of high-profile disasters raises awareness of particular risks more than others. Likewise, an event that hits close to home, such as having a friend diagnosed with cancer, heightens risk perception.
Imagination. When threats are invisible or hard to understand, people become confused about the nature of the risk, and the event becomes scarier.
Dread. Events that invoke dread — such as drowning or being eaten alive — scare people more than those that do not.
Age affected. Risks are more frightening when they affect children. Asbestos in a school building, for example, may bother people more than asbestos in a factory.
Uncertainty. Events inspire more fear when officials don’t communicate what is known — or when the risks are simply unknown. In the Deepwater Horizon spill, for example, officials could more easily estimate the amount of oil spewing into the ocean than they could predict what effect that would have on wildlife and fisheries.
Familiarity. Novel risks are perceived to be more dangerous than more familiar threats. That’s why West Nile virus may be perceived as more of a risk to health than not testing a smoke detector regularly.
Specificity. Victims who are publicly identified evoke a greater emotional reaction than those who remain nameless and faceless.
Personal impact. Risks that affect people personally are more frightening than those that affect strangers.
Fun factor. Engaging in risky behavior may not seem that way if it involves pleasure. Some examples are drug taking, unsafe sex, and high-risk sports.
Risk in perspective
There is no question that people living in the direct vicinity of high-profile disasters suffer mentally as well as physically. Hurricane Katrina, for example, was followed by an increase in psychiatric disorders, substance abuse, and domestic violence among people living in the areas affected.
For people who are affected indirectly by reading media reports, however, the real danger is heightened or exaggerated perception of risk that may not have a solid basis in fact. Keeping the risk in perspective will help prevent needless anxiety or counterproductive coping strategies.
Ropeik D. “Understanding Factors of Risk Perception,” Nieman Reports (Winter 2002).
Slovic P, et al. “Affect, Risk, and Decision Making,” Health Psychology (July 2005): Vol. 24, No. 4 Suppl., pp. S35–40.
Yun K, et al. “Moving Mental Health into the Disaster-Preparedness Spotlight,” New England Journal of Medicine (Sept. 23, 2010): Vol. 363, No. 13, pp. 1193–95.
Ironically, as our society and other industrialized nations have expended great effort to make life safer and healthier, many in the public have become more, rather than less, concerned about risk. These individuals see themselves as exposed to more serious risks than were faced by people in the past, and they believe that this situation is getting worse rather than better. Nuclear and chemical technologies (except for medicines) have been stigmatized by being perceived as entailing unnaturally great risks (Gregory et al., 1995). As a result, it has been difficult, if not impossible, to find host sites for disposing of high-level or low-level radioactive wastes, or for incinerators, landfills, and other chemical facilities.
Public perceptions of risk have been found to determine the priorities and legislative agendas of regulatory bodies such as the Environmental Protection Agency, much to the distress of agency technical experts who argue that other hazards deserve higher priority. The bulk of EPA’s budget in recent years has gone to hazardous waste primarily because the public believes that the cleanup of Superfund sites is one of the most serious environmental priorities for the country. Hazards such as indoor air pollution are considered more serious health risks by experts but are not perceived that way by the public (United States, 1987).
Great disparities in monetary expenditures designed to prolong life, as shown by Tengs et al. (1995), may also be traced to public perceptions of risk. Such discrepancies are seen as irrational by many harsh critics of public perceptions. These critics draw a sharp dichotomy between the experts and the public. Experts are seen as purveying risk assessments that are objective, analytic, wise, and rational, based on the real risks. In contrast, the public is seen to rely on perceptions of risk that are subjective, often hypothetical, emotional, foolish, and irrational (see, e.g., Covello et al., 1983; DuPont, 1980). Weiner (1993) defends this dichotomy, arguing that “This separation of reality and perception is pervasive in a technically sophisticated society, and serves to achieve a necessary emotional distance . . .” (p. 495).
In sum, polarized views, controversy, and overt conflict have become pervasive within risk assessment and risk management. A desperate search for salvation through risk-communication efforts began in the mid-1980s; yet, despite some localized successes, this effort has not stemmed the major conflicts or reduced much of the dissatisfaction with risk management. This dissatisfaction can be traced, in part, to a failure to appreciate the complex and socially determined nature of the concept “risk.” In the remainder of this paper, I shall describe several streams of research that demonstrate this complexity and point toward the need for new definitions of risk and new approaches to risk management.
The Subjective and Value-Laden Nature of Risk Assessment
Attempts to manage risk must confront the question: “What is risk?” The dominant conception views risk as “the chance of injury, damage, or loss” (Webster, 1983). The probabilities and consequences of adverse events are assumed to be produced by physical and natural processes in ways that can be objectively quantified by risk assessment. Much social science analysis rejects this notion, arguing instead that risk is inherently subjective (Funtowicz and Ravetz, 1992; Krimsky and Golding, 1992; Otway, 1992; Pidgeon et al., 1992; Slovic, 1992; Wynne, 1992). In this view, risk does not exist “out there,” independent of our minds and cultures, waiting to be measured. Instead, human beings have invented the concept to help them understand and cope with the dangers and uncertainties of life. Although these dangers are real, there is no such thing as “real risk” or “objective risk.” The nuclear engineer’s probabilistic risk estimate for a nuclear accident and the toxicologist’s quantitative estimate of a chemical’s carcinogenic risk are both based on theoretical models, whose structure is subjective and assumption-laden, and whose inputs are dependent on judgment. As we shall see, nonscientists have their own models, assumptions, and subjective assessment techniques (intuitive risk assessments), which are sometimes very different from the scientists’ models.
One way in which subjectivity permeates risk assessments is in the dependence of such assessments on judgments at every stage of the process, from the initial structuring of a risk problem to deciding which endpoints or consequences to include in the analysis, identifying and estimating exposures, choosing dose-response relationships, and so on. For example, even the apparently simple task of choosing a risk measure for a well-defined endpoint such as human fatalities is surprisingly complex and judgmental. Table 1 shows a few of the many different ways that fatality risks can be measured. How should we decide which measure to use when planning a risk assessment, recognizing that the choice is likely to make a big difference in how the risk is perceived and evaluated?
An example taken from Crouch and Wilson (1982) demonstrates how the choice of one measure or another can make a technology look either more or less risky. Between 1950 and 1970, coal mines became much less risky in terms of deaths from accidents per ton of coal, but they became marginally riskier in terms of deaths from accidents per employee. Which measure one thinks more appropriate for decision making depends on one’s point of view. From a national point of view, given that a certain amount of coal has to be obtained to provide fuel, deaths per million tons of coal is the more appropriate measure of risk, whereas from a labor leader’s point of view, deaths per thousand persons employed may be more relevant.
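A toy calculation makes the point concrete. The figures below are invented for illustration (Crouch and Wilson’s actual data are not reproduced here), but they show how the same history can register as safer by one measure and riskier by another:

```python
# Hypothetical coal-mining figures, invented for illustration only.
data = {
    1950: {"deaths": 600, "million_tons": 500, "thousand_workers": 500},
    1970: {"deaths": 250, "million_tons": 600, "thousand_workers": 190},
}

def per_million_tons(year):
    """Deaths per million tons of coal mined (the 'national' measure)."""
    d = data[year]
    return d["deaths"] / d["million_tons"]

def per_thousand_workers(year):
    """Deaths per thousand miners employed (the 'labor' measure)."""
    d = data[year]
    return d["deaths"] / d["thousand_workers"]

print(f"deaths per million tons:     1950={per_million_tons(1950):.2f}  "
      f"1970={per_million_tons(1970):.2f}")
print(f"deaths per thousand workers: 1950={per_thousand_workers(1950):.2f}  "
      f"1970={per_thousand_workers(1970):.2f}")
# With these numbers the per-ton rate falls while the per-worker rate
# rises: the mines look safer or riskier depending on the measure chosen.
```

The divergence arises because productivity per worker rose faster than the death toll fell, which is exactly the kind of structural detail a single summary measure hides.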
Each way of summarizing deaths embodies its own set of values (National Research Council, 1989). For example, “reduction in life expectancy” treats deaths of young people as more important than deaths of older people, who have less life expectancy to lose. Simply counting fatalities treats deaths of the old and young as equivalent; it also treats as equivalent deaths that come immediately after mishaps and deaths that follow painful and debilitating disease. Using “number of deaths” as the summary indicator of risk implies that it is as important to prevent deaths of people who engage in an activity by choice and have been benefiting from that activity as it is to protect those who are exposed to a hazard involuntarily and get no benefit from it. One can easily imagine a range of arguments to justify different kinds of unequal weightings for different kinds of deaths, but to arrive at any selection requires a value judgment concerning which deaths one considers most undesirable. To treat the deaths as equal also involves a value judgment.
The Multidimensionality of Risk
As will be shown in the next section, research has found that the public has a broad conception of risk, qualitative and complex, that incorporates considerations such as uncertainty, dread, catastrophic potential, controllability, equity, risk to future generations, and so forth, into the risk equation (Slovic, 1987). In contrast, experts’ perceptions of risk are not closely related to these dimensions or the characteristics that underlie them. Instead, studies show that experts tend to see riskiness as synonymous with probability of harm or expected mortality, consistent with the ways that risks tend to be characterized in risk assessments (see, for example, Cohen, 1985). As a result of these different perspectives, many conflicts over “risk” may result from experts and laypeople having different definitions of the concept. In this light, it is not surprising that expert recitations of “risk statistics” often do little to change people’s attitudes and perceptions.
There are legitimate, value-laden issues underlying the multiple dimensions of public risk perceptions, and these values need to be considered in risk-policy decisions. For example, is risk from cancer (a dreaded disease) worse than risk from auto accidents (not dreaded)? Is a risk imposed on a child more serious than a known risk accepted voluntarily by an adult? Are the deaths of 50 passengers in separate automobile accidents equivalent to the deaths of 50 passengers in one airplane crash? Is the risk from a polluted Superfund site worse if the site is located in a neighborhood that has a number of other hazardous facilities nearby? The difficult questions multiply when outcomes other than human health and safety are considered.
Studying Risk Perceptions: the psychometric paradigm
Just as the physical, chemical, and biological processes that contribute to risk or reduce risk can be studied scientifically, so can the processes affecting risk perceptions.
One broad strategy for studying perceived risk is to develop a taxonomy for hazards that can be used to understand and predict responses to their risks. A taxonomic scheme might explain, for example, people’s extreme aversion to some hazards, their indifference to others, and the discrepancies between these reactions and experts’ opinions. The most common approach to this goal has employed the psychometric paradigm (Fischhoff et al., 1978; Slovic et al., 1984), which uses psychophysical scaling and multivariate analysis techniques to produce quantitative representations of risk attitudes and perceptions. Within the psychometric paradigm, people make quantitative judgments about the current and desired riskiness of diverse hazards and the desired level of regulation of each. These judgments are then related to judgments about other properties, such as (i) the hazard’s status on characteristics that have been hypothesized to account for risk perceptions and attitudes (for example, voluntariness, dread, knowledge, controllability), (ii) the benefits that each hazard provides to society, (iii) the number of deaths caused by the hazard in an average year, (iv) the number of deaths caused by the hazard in a disastrous year, and (v) the seriousness of each death from a particular hazard relative to a death due to other causes.
Numerous studies carried out within the psychometric paradigm have shown that perceived risk is quantifiable and predictable. Psychometric techniques seem well suited for identifying similarities and differences among groups with regard to risk perceptions and attitudes (see Table 2). They have also shown that the concept “risk” means different things to different people. When experts judge risk, their responses correlate highly with technical estimates of annual fatalities. Lay people can assess annual fatalities if they are asked to (and produce estimates somewhat like the technical estimates). However, their judgments of risk are related more to other hazard characteristics (for example, catastrophic potential, threat to future generations) and, as a result, tend to differ from their own (and experts’) estimates of annual fatalities.
Various models have been advanced to represent the relationships between perceptions, behavior, and these qualitative characteristics of hazards. As we shall see, the picture that emerges from this work is both orderly and complex.
Factor-analytic representations. Psychometric studies have demonstrated that every hazard has a unique pattern of qualities that appears to be related to its perceived risk. Figure 1 shows the mean profiles across nine characteristic qualities of risk that emerged for nuclear power and medical x-rays in an early study (Fischhoff et al., 1978). Nuclear power was judged to have much higher risk than x-rays and to need much greater reduction in risk before it would become “safe enough.” As the figure illustrates, nuclear power also had a much more negative profile across the various risk characteristics.
Many of the qualitative risk characteristics that make up a hazard’s profile tend to be highly correlated with each other across a wide range of hazards. For example, hazards rated as “voluntary” tend also to be rated as “controllable” and “well-known”; hazards that appear to threaten future generations tend also to be seen as having catastrophic potential, and so on. Investigation of these interrelationships by means of factor analysis has indicated that the broader domain of characteristics can be condensed to a small set of higher-order characteristics or factors.
The factor space presented in Figure 2 has been replicated across groups of lay people and experts judging large and diverse sets of hazards. Factor 1, labeled “dread risk,” is defined at its high (right-hand) end by perceived lack of control, dread, catastrophic potential, fatal consequences, and the inequitable distribution of risks and benefits. Nuclear weapons and nuclear power score highest on the characteristics that make up this factor. Factor 2, labeled “unknown risk,” is defined at its high end by hazards judged to be unobservable, unknown, new, and delayed in their manifestation of harm. Chemical and DNA technologies score particularly high on this factor. A third factor, reflecting the number of people exposed to the risk, has been obtained in several studies.
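The dimension-reduction step behind such a factor space can be sketched in a few lines. The hazard names, characteristics, and ratings below are all invented for illustration, and a principal-components decomposition via SVD stands in for the factor analysis the psychometric studies actually used:

```python
import numpy as np

# Hypothetical mean ratings (1-7 scale) of six hazards on five
# risk characteristics; every number here is invented.
hazards = ["nuclear power", "pesticides", "handguns",
           "x-rays", "bicycles", "DNA technology"]
characteristics = ["dread", "catastrophic", "uncontrollable",
                   "unknown", "delayed harm"]
R = np.array([
    [6.5, 6.8, 6.2, 5.0, 5.5],   # nuclear power
    [5.0, 4.5, 5.2, 5.8, 6.0],   # pesticides
    [6.0, 3.5, 4.8, 2.0, 1.5],   # handguns
    [3.0, 2.5, 3.5, 4.0, 4.5],   # x-rays
    [2.0, 1.5, 2.0, 1.5, 1.0],   # bicycles
    [4.0, 4.5, 4.5, 6.5, 6.2],   # DNA technology
])

# Standardize each characteristic, then extract two principal
# factors via singular value decomposition.
Z = (R - R.mean(axis=0)) / R.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * S[:2]   # each hazard's position in a 2-factor space

for name, (f1, f2) in zip(hazards, scores):
    print(f"{name:15s} factor1={f1:+.2f}  factor2={f2:+.2f}")
```

Plotting the two score columns against each other yields a map analogous to Figure 2, with correlated characteristics collapsed into a small number of orthogonal axes.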
Research has shown that laypeople’s risk perceptions and attitudes are closely related to the position of a hazard within the factor space. Most important is the factor “Dread Risk.” The higher a hazard’s score on this factor (i.e., the further to the right it appears in the space), the higher its perceived risk, the more people want to see its current risks reduced, and the more they want to see strict regulation employed to achieve the desired reduction in risk. In contrast, experts’ perceptions of risk are not closely related to any of the various risk characteristics or factors derived from these characteristics. Instead, experts appear to see riskiness as synonymous with expected annual mortality (Slovic et al., 1979). As a result, many conflicts about risk may result from experts and laypeople having different definitions of the concept.
Perceptions Have Impacts: the social amplification of risk
Perceptions of risk play a key role in a process labeled social amplification of risk (Kasperson et al., 1988). Social amplification is triggered by the occurrence of an adverse event, which could be a major or minor accident, a discovery of pollution, an outbreak of disease, an incident of sabotage, and so on. Risk amplification reflects the fact that the adverse impacts of such an event sometimes extend far beyond the direct damages to victims and property and may result in massive indirect impacts such as litigation against a company or loss of sales, increased regulation of an industry, and so on. In some cases, all companies within an industry are affected, regardless of which company was responsible for the mishap. Thus, the event can be thought of as a stone dropped in a pond. The ripples spread outward, encompassing first the directly affected victims, then the responsible company or agency, and, in the extreme, reaching other companies, agencies, or industries (See Figure 3). Examples of events resulting in extreme higher-order impacts include the chemical manufacturing accident at Bhopal, India, the disastrous launch of the space shuttle Challenger, the nuclear-reactor accidents at Three Mile Island and Chernobyl, the adverse effects of the drug Thalidomide, the Exxon Valdez oil spill, the adulteration of Tylenol capsules with cyanide, and, most recently, the deaths of several individuals from anthrax. An important feature of social amplification is that the direct impacts need not be too large to trigger major indirect impacts. The seven deaths due to the Tylenol tampering resulted in more than 125,000 stories in the print media alone and inflicted losses of more than one billion dollars upon the Johnson & Johnson Company, due to the damaged image of the product (Mitchell, 1989). The cost of dealing with the anthrax threat will be far greater than this.
It appears likely that multiple mechanisms contribute to the social amplification of risk. First, extensive media coverage of an event can contribute to heightened perceptions of risk and amplified impacts (Burns et al., 1990). Second, a particular hazard or mishap may enter into the agenda of social groups, or what Mazur (1981) terms the partisans, within the community or nation. The attack on the apple growth-regulator “Alar” by the Natural Resources Defense Council demonstrates the important impacts that special-interest groups can trigger (Moore, 1989).
A third mechanism of amplification arises out of the interpretation of unfortunate events as clues or signals regarding the magnitude of the risk and the adequacy of the risk-management process (Burns et al., 1990; Slovic, 1987). The informativeness or signal potential of a mishap, and thus its potential social impact, appears to be systematically related to the perceived characteristics of the hazard. An accident that takes many lives may produce relatively little social disturbance (beyond that caused to the victims’ families and friends) if it occurs as part of a familiar and well-understood system (e.g., a train wreck). However, a small incident in an unfamiliar system (or one perceived as poorly understood), such as a nuclear waste repository or a recombinant DNA laboratory, may have immense social consequences if it is perceived as a harbinger of future and possibly catastrophic mishaps.
One implication of the signal concept is that effort and expense beyond that indicated by a cost-benefit analysis might be warranted to reduce the possibility of “high-signal events.” Unfortunate events involving hazards in the upper right quadrant of Figure 2 appear particularly likely to have the potential to produce large ripples. As a result, risk analyses involving these hazards need to be made sensitive to these possible higher order impacts. Doing so would likely bring greater protection to potential victims as well as to companies and industries.
Sex, Politics, and Emotion in Risk Judgments
Given the complex and subjective nature of risk, it should not surprise us that many interesting and provocative things occur when people judge risks. Recent studies have shown that factors such as gender, race, political worldviews, affiliation, emotional affect, and trust are strongly correlated with risk judgments. Equally important is that these factors influence the judgments of experts as well as the judgments of laypersons.
Sex is strongly related to risk judgments and attitudes. Several dozen studies have documented the finding that men tend to judge risks as smaller and less problematic than do women. A number of hypotheses have been put forward to explain these differences in risk perception. One approach has been to focus on biological and social factors. For example, women have been characterized as more concerned about human health and safety because they give birth and are socialized to nurture and maintain life (Steger and Witt, 1989). They have been characterized as physically more vulnerable to violence, such as rape, for example, and this may sensitize them to other risks (Baumer, 1978; Riger et al., 1978). The combination of biology and social experience has been put forward as the source of a “different voice” that is distinct to women (Gilligan, 1982; Merchant, 1980).
A lack of knowledge and familiarity with science and technology has also been suggested as a basis for these differences, particularly with regard to nuclear and chemical hazards. Women are discouraged from studying science and there are relatively few women scientists and engineers (Alper, 1993). However, Barke et al., (1997) have found that female physical scientists judge risks from nuclear technologies to be higher than do male physical scientists. Similar results with scientists were obtained by Slovic et al., (1997) who found that female members of the British Toxicological Society were far more likely than male toxicologists to judge societal risks as moderate or high. Certainly the female scientists in these studies cannot be accused of lacking knowledge and technological literacy. Something else must be going on.
Hints about the origin of these sex differences come from a study by Flynn et al., (1994) in which 1,512 Americans were asked, for each of 25 hazard items, to indicate whether the hazard posed (1) little or no risk, (2) slight risk, (3) moderate risk, or (4) high risk to society. The percentage of “high-risk” responses was greater for women on every item. Perhaps the most striking result from this study is shown in Figure 4, which presents the mean risk ratings separately for White males, White females, non-White males, and non-White females. Across the 25 hazards, White males produced risk-perception ratings that were consistently much lower than the means of the other three groups.
Although perceived risk was inversely related to income and educational level, controlling for these differences statistically did not reduce much of the White-male effect on risk perception.
When the data underlying Figure 4 were examined more closely, Flynn et al. observed that not all White males perceived risks as low. The “White-male effect” appeared to be caused by about 30% of the White-male sample who judged risks to be extremely low. The remaining White males were not much different from the other subgroups with regard to perceived risk.
What differentiated these White males who were most responsible for the effect from the rest of the sample, including other White males who judged risks as relatively high? When compared to the remainder of the sample, the group of White males with the lowest risk-perception scores were better educated (42.7% college or postgraduate degree vs. 26.3% in the other group), had higher household incomes (32.1% above $50,000 vs. 21.0%), and were politically more conservative (48.0% conservative vs. 33.2%).
Particularly noteworthy is the finding that the low risk-perception subgroup of White males also held very different attitudes from the other respondents. Specifically, they were more likely than the others to:
- Agree that future generations can take care of themselves when facing risks imposed on them from today’s technologies (64.2% vs. 46.9%).
- Agree that if a risk is very small it is okay for society to impose that risk on individuals without their consent (31.7% vs. 20.8%).
- Agree that science can settle differences of opinion about the risks of nuclear power (61.8% vs. 50.4%).
- Agree that government and industry can be trusted with making the proper decisions to manage the risks from technology (48.0% vs. 31.1%).
- Agree that we can trust the experts and engineers who build, operate, and regulate nuclear power plants (62.6% vs. 39.7%).
- Agree that we have gone too far in pushing equal rights in this country (42.7% vs. 30.9%).
- Agree with the use of capital punishment (88.2% vs. 70.5%).
- Disagree that technological development is destroying nature (56.9% vs. 32.8%).
- Disagree that they have very little control over risks to their health (73.6% vs. 63.1%).
- Disagree that the world needs a more equal distribution of wealth (42.7% vs. 31.3%).
- Disagree that local residents should have the authority to close a nuclear power plant if they think it is not run properly (50.4% vs. 25.1%).
- Disagree that the public should vote to decide on issues such as nuclear power (28.5% vs. 16.7%).
In sum, the subgroup of White males who perceive risks to be quite low can be characterized by trust in institutions and authorities and by anti-egalitarian attitudes, including a disinclination toward giving decision-making power to citizens in areas of risk management.
The results of this study raise new questions. What does it mean for the explanations of gender differences when we see that the sizable differences between White males and White females do not exist for non-White males and non-White females? Why do a substantial percentage of White males see the world as so much less risky than everyone else sees it?
Obviously, the salience of biology is reduced by these data on risk perception and race. Biological factors should apply to non-White men and women as well as to White men and women. The present data thus move us away from biology and toward sociopolitical explanations. Perhaps White males see less risk in the world because they create, manage, control, and benefit from many of the major technologies and activities. Perhaps women and non-White men see the world as more dangerous because in many ways they are more vulnerable, because they benefit less from many of its technologies and institutions, and because they have less power and control over what happens in their communities and their lives. Although the survey conducted by Flynn, Slovic, and Mertz was not designed to test these alternative explanations, the race and gender differences in perceptions and attitudes point toward the role of power, status, alienation, trust, perceived government responsiveness, and other sociopolitical factors in determining perception and acceptance of risk.
To the extent that these sociopolitical factors shape public perception of risks, we can see why traditional attempts to make people see the world as White males do, by showing them statistics and risk assessments, are often unsuccessful. The problem of risk conflict and controversy goes beyond science. It is deeply rooted in the social and political fabric of our society.
Risk Perception, Emotion, and Affect
The studies described in the preceding section illustrate the role of worldviews as orienting mechanisms. Research suggests that emotion is also an orienting mechanism that directs fundamental psychological processes such as attention, memory, and information processing. Emotion and worldviews may thus be functionally similar in that both may help us navigate quickly and efficiently through a complex, uncertain, and sometimes dangerous world.
The discussion in this section is concerned with a subtle form of emotion called affect, defined as a positive (like) or negative (dislike) evaluative feeling toward an external stimulus (e.g., some hazard such as cigarette smoking). Such evaluations occur rapidly and automatically; note how quickly you sense a negative affective feeling toward the stimulus word “hate” or the word “cancer.”
Support for the conception of affect as an orienting mechanism comes from a study by Alhakami and Slovic (1994). They observed that, whereas the risks and benefits to society from various activities and technologies (e.g., nuclear power, commercial aviation) tend to be positively associated in the world, they are negatively correlated in people’s minds (higher perceived benefit is associated with lower perceived risk; lower perceived benefit is associated with higher perceived risk). Alhakami and Slovic found that this inverse relationship was linked to people’s reliance on general affective evaluations when making risk/benefit judgments. When the affective evaluation was favorable (as with automobiles, for example), the activity or technology being judged was seen as having high benefit and low risk; when the evaluation was unfavorable (e.g., as with pesticides), risks tended to be seen as high and benefits as low. It thus appears that the affective response is primary, and the risk and benefit judgments are derived (at least partly) from it.
Finucane et al. (2000) investigated the inverse relationship between risk and benefit judgments under a time-pressure condition designed to limit the use of analytic thought and enhance the reliance on affect. As expected, the inverse relationship was strengthened when time pressure was introduced. A second study tested and confirmed the hypothesis that providing information designed to alter the favorability of one’s overall affective evaluation of an item (say nuclear power) would systematically change the risk and benefit judgments for that item. For example, providing information calling people’s attention to the benefits provided by nuclear power (as a source of energy) depressed people’s perception of the risks of that technology. The same sort of reduction in perceived risk occurred for food preservatives and natural gas, when information about their benefits was provided. Information about risk was also found to alter perception of benefit. A model depicting how reliance upon affect can lead to these observed changes in perception of risk and benefit is shown in Figure 5.
Slovic et al. (1991a, b) studied the relationship between affect and perceived risk for hazards related to nuclear power. For example, Slovic, Flynn, and Layman asked respondents: “What is the first thought or image that comes to mind when you hear the phrase ‘nuclear waste repository’?” After providing up to three associations to the repository stimulus, each respondent rated the affective quality of these associations on a five-point scale, ranging from extremely negative to extremely positive.
Although most of the images that people evoke when asked to think about nuclear power or nuclear waste are affectively negative (e.g., death, destruction, war, catastrophe), some are positive (e.g., abundant electricity and the benefits it brings). The affective values of these positive and negative images appear to sum in a way that is predictive of our attitudes, perceptions, and behaviors. If the balance is positive, we respond favorably; if it is negative, we respond unfavorably. For example, the affective quality of a person’s associations to a nuclear waste repository was found to be related to whether the person would vote for or against a referendum on a nuclear waste repository and to their judgments regarding the risk of a repository accident. Specifically, more than 90% of those people whose first image was judged very negative said that they would vote against a repository in Nevada; fewer than 50% of those people whose first image was positive said they would vote against the repository (Slovic et al., 1991a).
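The summation idea in the passage above can be sketched as a simple scoring rule: rate each evoked image on the five-point scale described earlier, sum the ratings, and predict a favorable response when the balance is positive. The ratings and the zero threshold below are illustrative assumptions, not the study’s fitted model.

```python
# Hypothetical image ratings on the five-point scale from the passage
# (-2 = extremely negative, +2 = extremely positive).
def affect_balance(image_ratings):
    """Sum the affective values of a respondent's associations."""
    return sum(image_ratings)

def predicted_vote(image_ratings):
    """Toy decision rule: favorable balance -> 'for', otherwise 'against'."""
    return "for" if affect_balance(image_ratings) > 0 else "against"

# A respondent whose images were 'death' (-2), 'catastrophe' (-2), and
# 'electricity' (+1) would be predicted to vote against the repository.
print(predicted_vote([-2, -2, 1]))  # against
print(predicted_vote([2, 1, 0]))   # for
```

This kind of rule captures the passage’s claim that behavior tracks the sign of the summed affective values rather than any single image.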
Using data from the national survey of 1,500 Americans described earlier, Peters and Slovic (1996) found that the affective ratings of associations to the stimulus “nuclear power” were highly predictive of responses to the question: “If your community was faced with a shortage of electricity, do you agree or disagree that a new nuclear power plant should be built to supply that electricity?” Among the 25% of respondents with the most positive associations to nuclear power, 69% agreed to building a new plant. Among the 25% of respondents with the most negative associations, only 13% agreed.
The Importance of Trust
The research described above has painted a portrait of risk perception influenced by the interplay of psychological, social, and political factors. Members of the public and experts can disagree about risk because they define risk differently, have different worldviews, different affective experiences and reactions, or different social status. Another reason why the public often rejects scientists’ risk assessments is lack of trust. Trust in risk management, like risk perception, has been found to correlate with gender, race, worldviews, and affect.
Social relationships of all types, including risk management, rely heavily on trust. Indeed, much of the contentiousness that has been observed in the risk-management arena has been attributed to a climate of distrust that exists between the public, industry, and risk-management professionals (e.g., Slovic, 1993; Slovic et al., 1991a). The limited effectiveness of risk-communication efforts can be attributed to the lack of trust. If you trust the risk manager, communication is relatively easy. If trust is lacking, no form or process of communication will be satisfactory (Fessenden-Raden et al., 1987).
How Trust Is Created and Destroyed
One of the most fundamental qualities of trust has been known for ages. Trust is fragile. It is typically created rather slowly, but it can be destroyed in an instant, by a single mishap or mistake. Thus, once trust is lost, it may take a long time to rebuild it to its former state. In some instances, lost trust may never be regained. Abraham Lincoln understood this quality. In a letter to Alexander McClure, he observed: “If you once forfeit the confidence of your fellow citizens, you can never regain their respect and esteem” [italics added].
The fact that trust is easier to destroy than to create reflects certain fundamental mechanisms of human psychology called here “the asymmetry principle.” When it comes to winning trust, the playing field is not level. It is tilted toward distrust, for each of the following reasons:
- Negative (trust-destroying) events are more visible or noticeable than positive (trust-building) events. Negative events often take the form of specific, well-defined incidents such as accidents, lies, discoveries of errors, or other mismanagement. Positive events, while sometimes visible, more often are fuzzy or indistinct. For example, how many positive events are represented by the safe operation of a nuclear power plant for one day? Is this one event? Dozens of events? Hundreds? There is no precise answer. When events are invisible or poorly defined, they carry little or no weight in shaping our attitudes and opinions.
- When events are well-defined and do come to our attention, negative (trust-destroying) events carry much greater weight than positive events (Slovic, 1993).
- Adding fuel to the fire of asymmetry is yet another idiosyncrasy of human psychology: sources of bad (trust-destroying) news tend to be seen as more credible than sources of good news. The findings reported in Section 3.4 regarding “intuitive toxicology” illustrate this point. In general, confidence in the validity of animal studies is not particularly high. However, when told that a study has found that a chemical is carcinogenic in animals, members of the public express considerable confidence in the validity of this study for predicting health effects in humans.
- Another important psychological tendency is that distrust, once initiated, tends to reinforce and perpetuate distrust. Distrust tends to inhibit the kinds of personal contacts and experiences that are necessary to overcome distrust. By avoiding others whose motives or actions we distrust, we never get to see that these people are competent, well-meaning, and trustworthy.
“The System Destroys Trust”
Thus far we have been discussing the psychological tendencies that create and reinforce distrust in situations of risk. Appreciation of those psychological principles leads us toward a new perspective on risk perception, trust, and conflict. Conflicts and controversies surrounding risk management are not due to public irrationality or ignorance but, instead, can be seen as expected side effects of these psychological tendencies, interacting with a highly participatory democratic system of government and amplified by certain powerful technological and social changes in society. Technological change has given the electronic and print media the capability (effectively utilized) of informing us of news from all over the world, often right as it happens. Moreover, just as individuals give greater weight and attention to negative events, so do the news media. Much of what the media reports is bad (trust-destroying) news (Lichtenberg and MacLean, 1992).
A second important change, a social phenomenon, is the rise of powerful special interest groups, well-funded (by a fearful public) and sophisticated in using their own experts and the media to communicate their concerns and their distrust to the public to influence risk policy debates and decisions (Fenton, 1989). The social problem is compounded by the fact that we tend to manage our risks within an adversarial legal system that pits expert against expert, contradicting each other’s risk assessments and further destroying the public trust.
The young science of risk assessment is too fragile, too indirect, to prevail in such a hostile atmosphere. Scientific analysis of risks cannot allay our fears of low-probability catastrophes or delayed cancers unless we trust the system. In the absence of trust, science (and risk assessment) can only feed public concerns, by uncovering more bad news. A single study demonstrating an association between exposure to chemicals or radiation and some adverse health effect cannot easily be offset by numerous studies failing to find such an association. Thus, for example, the more studies that are conducted looking for effects of electric and magnetic fields or other difficult-to-evaluate hazards, the more likely it is that these studies will increase public concerns, even if the majority of these studies fail to find any association with ill health (MacGregor et al., 1994; Morgan et al., 1985). In short, because evidence for lack of risk often carries little weight, risk-assessment studies tend to increase perceived risk.
Resolving Risk Conflicts: Where Do We Go from Here?
The psychometric paradigm has been employed internationally. One such international study by Slovic et al. (2000) helps frame two different solutions to resolving risk conflicts. This study compared public views of nuclear power in the United States, where this technology is resisted, and France, where nuclear energy appears to be embraced (France obtains about 80% of its electricity from nuclear power). Researchers found, to their surprise, that concerns about the risks from nuclear power and nuclear waste were high in France and were at least as great there as in the U.S. Thus, perception of risk could not account for the different level of reliance on nuclear energy in the two countries. Further analysis of the survey data uncovered a number of differences that might be important in explaining the difference between France and the U.S. Specifically, the French:
- saw greater need for nuclear power and greater economic benefit from it;
- had greater trust in scientists, industry, and government officials who design, build, operate, and regulate nuclear power plants;
- were more likely to believe that decision-making authority should reside with the experts and government authorities, rather than with the people.
These findings point to some important differences between the workings of democracy in the U.S. and France and the effects of different “democratic models” on acceptance of risks. One such model relies primarily on technical solutions to resolving risk conflicts; the other looks to process-oriented solutions.
Technical Solutions to Risk Conflicts
There has been no shortage of high-level attention given to the risk conflicts described above. One prominent proposal by Justice Stephen Breyer (1993) attempts to break what he sees as a vicious circle of public perception, congressional overreaction, and conservative regulation that leads to obsessive and costly preoccupation with reducing negligible risks as well as to inconsistent standards among health and safety programs. Breyer sees public misperceptions of risk and low levels of mathematical understanding at the core of excessive regulatory response. His proposed solution is to create a small centralized administrative group charged with creating uniformity and rationality in highly technical areas of risk management. This group would be staffed by civil servants with experience in health and environmental agencies, Congress, and the Office of Management and Budget (OMB). A parallel is drawn between this group and the prestigious Conseil d’Etat in France.
Similar frustration with the costs of meeting public demands led the 104th Congress to introduce numerous bills designed to require all major new regulations to be justified by extensive risk assessments. Proponents of this legislation argued that such measures are necessary to ensure that regulations are based on “sound science” and effectively reduce significant risks at reasonable costs.
The language of this proposed legislation reflects the traditional narrow view of risk and risk assessment based “only on the best reasonably available scientific data and scientific understanding.” Agencies are further directed to develop a systematic program for external peer review using “expert bodies” or “other devices comprised of participants selected on the basis of their expertise relevant to the sciences involved” (United States, 1995, pp. 57-58). Public participation in this process is advocated, but no mechanisms for this are specified.
The proposals by Breyer and the 104th Congress are typical in their call for more and better technical analysis and expert oversight to rationalize risk management. There is no doubt that technical analysis is vital for making risk decisions better informed, more consistent, and more accountable. However, value conflicts and pervasive distrust in risk management cannot easily be reduced by technical analysis. Trying to address risk controversies primarily with more science is, in fact, likely to exacerbate conflict.
A major objective of this paper has been to demonstrate the complexity of risk and its assessment. To summarize the earlier discussions, danger is real, but risk is socially constructed. Risk assessment is inherently subjective and represents a blending of science and judgment with important psychological, social, cultural, and political factors. Finally, our social and democratic institutions, remarkable as they are in many respects, breed distrust in the risk arena.
Whoever controls the definition of risk controls the rational solution to the problem at hand. If you define risk one way, then one option will rise to the top as the most cost-effective or the safest or the best. If you define it another way, perhaps incorporating qualitative characteristics and other contextual factors, you will likely get a different ordering of your action solutions (Fischhoff et al., 1984). Defining risk is thus an exercise in power.
Scientific literacy and public education are important, but they are not central to risk controversies. The public is not irrational. The public is influenced by emotion and affect in a way that is both simple and sophisticated. So are scientists. The public is influenced by worldviews, ideologies, and values. So are scientists, particularly when they are working at the limits of their expertise.
The limitations of risk science, the importance and difficulty of maintaining trust, and the subjective and contextual nature of the risk game point to the need for a new approach: one that focuses on introducing more public participation into both risk assessment and risk decision making to make the decision process more democratic, improve the relevance and quality of technical analysis, and increase the legitimacy and public acceptance of the resulting decisions. Work by scholars and practitioners in Europe and North America has begun to lay the foundations for improved methods of public participation within deliberative decision processes that include negotiation, mediation, oversight committees, and other forms of public involvement (English, 1992; Kunreuther et al., 1993; National Research Council, 1996; Renn et al., 1991, 1995).
Recognizing interested and affected citizens as legitimate partners in the exercise of risk assessment is no short-term panacea for the problems of risk management. It won’t be easy and it isn’t guaranteed. But serious attention to participation and process issues may, in the long run, lead to more satisfying and successful ways to manage risk.
ALHAKAMI, A. S.; SLOVIC, P. A psychological study of the inverse relationship between perceived risk and perceived benefit. Risk Analysis, New Jersey, v. 14, n. 6, p. 1085-1096, Dec. 1994.
ALPER, J. The pipeline is leaking women all the way along. Science, Washington, DC, v. 260, n. 5106, p. 409-411, Apr. 1993.
BARKE, R.; JENKINS-SMITH, H.; SLOVIC, P. Risk perceptions of men and women scientists. Social Science Quarterly, Oklahoma, v. 78, n. 1, p. 167-176, 1997.
BAUMER, T. L. Research on fear of crime in the United States. Victimology, v. 3, p. 254-264, 1978.
BREYER, S. Breaking the vicious circle: toward effective risk regulation. Cambridge, MA: Harvard University Press, 1993.
BURNS, W. et al. Social amplification of risk: an empirical study. Carson City, NV: Nevada Agency for Nuclear Projects Nuclear Waste Project Office, 1990.
COHEN, B. L. Criteria for technology acceptability. Risk Analysis, New Jersey, v. 5, n. 1, p. 1-3, Mar. 1985.
COVELLO, V. T. et al. The analysis of actual versus perceived risks. New York: Plenum, 1983.
CROUCH, E. A. C.; WILSON, R. Risk/Benefit analysis. Cambridge, MA: Ballinger, 1982.
DUPONT, R. L. Nuclear phobia: phobic thinking about nuclear power. Washington, DC: The Media Institute, 1980.
ENGLISH, M. R. Siting low-level radioactive waste disposal facilities: the public policy dilemma. New York: Quorum, 1992.
Three experts at the National Safety Council (NSC) Annual Congress and Expo in Anaheim, Calif., examined how the human element – behaviors, actions and decisions – can affect risk and impact workplace safety.
The panel featured Brian Hughes, vice president of Apollo Associated Services LLC; Stuart Alleman, master expert for Raytheon Space and Airborne; and Steve Brown, corporate safety manager for Southern California Edison. Hughes opened the session by urging the panel and audience to define risk.
Brown said his company considers risk “the potential for something to go wrong or the potential for someone to sustain an injury.”
Alleman added, “If we put our people at risk, we’ll have problems.”
“Most definitions of risk look at the downside,” Hughes explained. But he added that in the finance world, risk means variability. “Risk brings greater returns, and it also brings greater losses.”
Workplace safety, however, involves more than finances and spreadsheets: the human element can affect risk, Hughes said.
“When people are involved, people are highly variable, and their actions are hard to predict,” he said.
Actions vs. Conditions
The components of risk include systematic risk and unsystematic risk. Systematic risk, as Hughes described it, is risk inherent in the market – it cannot be diversified away. Unsystematic risk, meanwhile, is the risk specific to any individual asset in the market. Total risk is a combination of these two types.
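Hughes’ finance framing can be made concrete with the standard single-factor decomposition, in which an asset’s total variance splits into a systematic (market-driven) component and an unsystematic (idiosyncratic) component: total variance = beta² × market variance + idiosyncratic variance. The figures below are illustrative, not numbers from the panel.

```python
# Standard single-factor risk decomposition: total variance equals
# systematic variance (beta^2 * market variance) plus idiosyncratic
# variance. All figures here are illustrative.
def total_risk(beta, market_var, idio_var):
    systematic = beta ** 2 * market_var      # cannot be diversified away
    unsystematic = idio_var                  # specific to the individual asset
    return systematic + unsystematic

# An asset with beta 1.2, market variance 0.04, idiosyncratic variance 0.02:
var = total_risk(1.2, 0.04, 0.02)
print(round(var, 4))  # 0.0776
```

The decomposition also explains Hughes’ point that only one component is diversifiable: holding many assets shrinks the idiosyncratic term toward zero, while the beta-driven term remains.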
“Decisions involve risk,” Hughes said. “Effect is caused by both action and condition.”
The difficulty in deciphering the human element of risk may stem from the fact that most companies tend to focus on the action rather than the condition.
“We don’t know to look for conditions. People are generally focused on action causes,” Hughes said.
Alleman agreed, explaining that he noticed a pattern when analyzing 14 similar workplace incidents.
“Each individual organization handled [the situation] in [a] certain way,” he said. “They all attacked actions – retrain people, put more reviews in place to solve [the] problem and lower the threshold – [but] didn’t fix any of them.”
When Alleman and his team examined the incidents together instead of separately, they noticed some common causal effects, including behavior, ownership of problems and how those problems were dealt with when they happened.
“When we looked at them together, we saw the systematic problems,” Alleman explained.
The panel also discussed how company management could affect how human risk elements impact workplace safety.
“In the past, we looked at the floor risk level,” Brown said. “We can lower that floor, but one of the problems we found is getting the management involved in lowering that floor and taking risk away.”
For example, Brown said in one case, employees were expected to wear safety glasses when working in a particular area. When management was around, workers would wear the protection. But with no one around to watch them, many did not wear the glasses.
“Once we got them involved and helped them realize what’s in it for them, it really turned the corner for us, getting employees involved,” Brown said. “The best way we got them involved was to encourage the safety team environment and then having management in there, giving them training for oversight and to be quiet and listen and address what they’re coming up with.”
The panel members agreed that putting additional pressure – whether it’s schedule, cost, quality or another type of pressure – on employees can increase risk in the workplace. By combining the analyses from the past year to look for systemic causes across the board, Alleman said pressure was clearly a factor.
“When they put that pressure on people, instances happen,” Alleman said.
When management was presented with the information that this pressure helped drive increased risk and contributed to incidents in the workplace, Alleman said management first tried to push the issue away. But when yet another incident happened, management finally realized pressure indeed might be a factor.
“They need to step back and see how they’re affecting the bottom-level people having accidents,” Alleman said of management.
“As things get more competitive, you have to increase complexity of the products you’re taking on,” Hughes explained. “I think people tend to undervalue the risks involved in that kind of change. That’s going to create a floor-level risk around schedule pressure that everyone will see, not just safety: quality, delivery, customer satisfaction, etc.”
When management listens to employees, the human element affecting risk may decrease, the panel explained.
“A lot of times, we blame people for what happens,” Alleman pointed out. “The secret we found is if you can get the top level to listen to the bottom level, the listening will surface issues quickly and resolve themselves without you having to get involved.”
Companies are all about people and a company’s success will depend on its people. Yet they are also a company’s biggest risk. Getting a measure of people and their potential shortcomings presents one of the biggest challenges to companies. Ironically, though, a company needs intelligent, experienced and ethical people to manage every other type of corporate risk.
According to research by talent measurement company SHL, one in eight managers (mostly middle managers) and professionals is a high risk to his or her company, mainly through poor decision-making and communications.
People risk defies precise quantification but it would seem that individual behaviour is inextricably linked to a company’s culture. Managing people (the HR component) and leading people (the CEO/board of directors) are very real risks and not the soft issues – as once thought. Efforts to mitigate HR risk, therefore, should not be ignored.
The following examples show just how negatively people can affect an organisation – some with grave consequences, others with consequences that could easily have been avoided.
A compliance department undertook a special review of one of the daily regulatory reports to check whether the company was complying with all the relevant regulatory requirements. The review revealed definite areas of concern and there were other breaches of the regulatory requirements. The department drafted a document of the findings, which turned out to be the easy part. The difficult part was deciding what to do with this report.
Prior to this incident, there were other instances where compliance concerns regarding other issues related to the same specific director were taken to their boss. Meetings were promised with the department director but never materialised. The reports themselves were eventually ‘forgotten’ and the director in question was regarded as ‘untouchable’.
This time, however, the compliance officers considered these breaches urgent and serious. They decided to escalate the findings to the boss as usual, but also to copy in other senior internal people as well as the firm’s directors. An urgent board meeting was held. Nobody supported the compliance officers or their report. A stressful, conflict-ridden period followed, but then came a lucky break. A whistle-blower used the hotline to report other concerns regarding the particular director and her department. Retribution was not sweet, however: the director resigned before the end of the disciplinary hearing, escaping both public censure and any kind of real punishment. The director was free to move on to any other company after resigning, leaving potential employers vulnerable to an undesirable employee profile.
The questions one ponders over in this example are associated with people risk rather than the regulatory risks identified:
- Why did no one express concern about the findings in the report?
- Why was it not acknowledged that the compliance function was doing its job?
- Why were the board of directors and the department director concerned allowed to get away with such behaviour towards the compliance department?
- Why did no-one in senior management question why the department director’s reactions were so extreme?
- Why, with the numerous different charges, did senior management not question the morals and principles of the director and ensure that some punishment or action was meted out to the director?
- What does this say about the moral compass of the other directors and bode for the company and future employers?
Risk management initiatives must include people risk
Consider the following example:
- A very trusted driver – who had been working for the company for some fifteen years – was well respected until one weekend, he unintentionally pressed the car-tracking alarm button on the key ring of the company car.
- The tracking company phoned the chief operating officer and it was revealed that the car was in another province over the weekend, obviously taken without permission. The driver had been using the company car for private use.
- An inspection of the delivery book indicated many long trips to clients and regulators that were never commissioned over a few years.
- To add insult to injury, it was later discovered that the speedometer was not working in any event. But whose fault was it?
- The delivery book and the driver were not supervised or monitored. It could be argued that, had the proper risk-control measures been put in place, the driver might still have his job, financial loss would have been avoided, and the time lost to investigations, interrogations, disciplinary hearings and all the attendant bureaucracy would have been saved.
Sometimes, too, management just does not really want to deal with the human element of risk.
One strange but true example is of an employee who fell pregnant with her second child within two months after the birth of her first child:
- It was not planned and she was devastated, thinking she would lose her job.
- Of course, the policy on maternity leave was available on the intranet but not read.
- She successfully explained to colleagues and her management that her expanding stomach was a medical problem and not a baby – despite the disbelief.
- Even more strangely, neither management – nor the staff member – ever consulted the policy or HR in this regard to seek guidance or assurance. The HR manager avoided the issue.
Risk management initiatives are about managing risk holistically – referred to as enterprise-wide risk management. Risk falls heavily within the HR space and includes understanding and assessing the interactions and interdependencies between various departments and stakeholders.
Dawn Pretorius has, for some 12 years, run her own agency focusing on consulting, business strategy, training and development. A specific area of expertise for her includes risk management, compliance and corporate governance consulting. Dawn is a professional member of the Compliance Institute of South Africa, and her practice is accredited with the Financial Services Board. She has just published Beyond play: a down-to-earth approach to governance, risk and compliance.
Among many other qualifications, Dawn has a M.Com, B.Tech Banking, FIB(SA), MAP (Wits Business School). Her career has concentrated on many facets in the banking industry, such as financial and estate planning; private and offshore banking; company structures; credit, risk, compliance and corporate governance; marketing and communication and management training; and development in both technical and soft skills.