… Plus 6 Steps to Enhanced Assurance
The audit profession is facing unprecedented demands, but there are a host of tools available to help. James Bone outlines the benefits to automating audit tasks.
Internal audit is under increasing pressure from many quarters, including challenges to audit objectivity and ethical behavior and requests to reduce or modify audit findings. “More than half of North American Chief Audit Executives (CAEs) said they had been directed to omit or modify an important audit finding at least once, and 49 percent said they had been directed not to perform audit work in high-risk areas.” That’s according to a report by The Institute of Internal Auditors (IIA) Research Foundation, based on a survey of 494 CAEs and some follow-up interviews.
Challenges to audit findings are a normal part of the process of clarifying risks associated with weaknesses in internal controls and gaps that expose the organization to threats. However, reducing subjectivity and improving audit consistency are critical to minimizing second-guessing and enhancing credibility. One way to improve audit consistency and objectivity is to reframe the business case for audit automation.
Audit automation gives audit professionals the tools to reduce the effort spent on low-risk, high-frequency areas and instead detect changes in those areas, monitoring the velocity of high-frequency risks that may lead to increased exposures or the development of new risks.
More importantly, challenges to audit findings associated with low-frequency, high-impact (less common) risks typically involve areas of uncertainty that are harder to justify without objective data. Uncertainty, or “unknown unknowns,” is the hardest risk to justify using a subjective, point-in-time audit methodology. Uncertainty, by definition, requires statistical and predictive methods that give auditors an understanding of the distribution of probabilities, as well as the correlations and degrees of confidence associated with a risk. Probability management gives auditors next-level capabilities to discuss risks that are elusive to nail down. Automation provides internal auditors with the tools to shape the discussion about uncertainty more clearly and to understand the context in which these events become more prevalent.
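To make the idea of probability management concrete, here is a minimal, hypothetical sketch (the event frequencies and loss severities are invented, and only the Python standard library is used) of how simulating a distribution of outcomes lets an auditor make a defensible confidence statement instead of relying on a single point-in-time estimate:

```python
import random
import statistics

random.seed(42)

def simulate_annual_loss(trials=10_000):
    """Simulate many possible annual loss outcomes for one risk area."""
    losses = []
    for _ in range(trials):
        events = random.randint(0, 12)             # uncertain event frequency
        loss = sum(random.lognormvariate(10, 1)    # uncertain severity per event
                   for _ in range(events))
        losses.append(loss)
    return losses

losses = sorted(simulate_annual_loss())
mean_loss = statistics.mean(losses)
p95 = losses[int(0.95 * len(losses))]  # 95th percentile: a "degree of confidence"

print(f"expected annual loss: {mean_loss:,.0f}")
print(f"95% of simulated years fall below: {p95:,.0f}")
```

Rather than reporting one number, the auditor can now say that 95% of simulated outcomes fall below a stated threshold, which reframes the conversation about uncertainty in objective terms.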
Risk communication is one of the biggest challenges for all oversight professionals. According to an article in Harvard Business Review:
“We tend to be overconfident about the accuracy of our forecasts and risk assessments and far too narrow in our assessment of the range of outcomes that may occur. Organizational biases also inhibit our ability to discuss risk and failure. In particular, teams facing uncertain conditions often engage in groupthink: Once a course of action has gathered support within a group, those not yet on board tend to suppress their objections — however valid — and fall in line.”
Everyone in the organization has a slightly different perception of risk, influenced by heuristics developed over a lifetime of experience. Heuristics are mental shortcuts individuals use to make decisions. Most of the time, our heuristics work just fine for the familiar problems we face. Unfortunately, we do not recognize when our biases mislead us in judging more complex risks. In some cases, what appear to be lapses in ethical behavior may simply be normal human biases that lead to different perceptions of risk. How does internal audit overcome these challenges?
The Opportunity Cost of Not Automating
Technology is not a solution in and of itself; it is an enabler that makes staff more effective when integrated strategically to complement their strengths and address areas needing improvement. Automation creates situational awareness of risks, and technology solutions that improve situational awareness in audit assurance should be the end goal. Situational awareness (SA) in audit is not a one-size-fits-all proposition: in some organizations, SA involves improved data analysis; in others, it may include a range of continuous monitoring and reporting in near real time. Situational awareness reduces human error by making sense of the environment with objective data.
A growing body of research demonstrates that human error is the biggest source of risk in a wide range of organizations, from IT security to health care and organizational performance. Automation now makes it possible to reduce human error and improve insight into operational performance. Chief Audit Officers have the opportunity to lead, in collaboration with operations, finance, compliance and risk management, on automation that supports each of the key stakeholders who provide assurance.
Collaboration on automation reduces redundant data requests, risk assessments, compliance reviews and demands on IT departments. Smart automation integrates oversight into operations, reduces human error, improves internal controls and creates situational awareness where risks need to be managed. Forgoing these benefits is the opportunity cost of not automating.
A Pathway to Enhanced Assurance
Audit automation has become a diverse set of solutions offered by a range of providers, but that point alone should not drive the decision to automate. Developing a coherent strategy for automation is the key first step. Whether you are a Chief Audit Officer just starting to consider automation or you and your team are well-versed in automation platforms, it may be a good time to rethink audit automation not as a one-off budget item but as a strategic imperative, integrated into operations and focused on the things the board and senior executives think are important. This requires the organization to see audit as integral to operational excellence and business intelligence. Reframing the role of audit through automation is the first step toward enhanced assurance.
Auditors are taught to be skeptical while conducting attestation engagements; however, there is no statistical definition for assurance. Assurance requires the use of subjective judgments in the risk assessment process that may lead to variability in the quality of audits between different people within the same audit function. According to ISACA’s IS Audit and Assurance Guideline 2202 Risk Assessment in Planning, Risk Assessment Methodology 2.2.4, “all risk assessment methodologies rely on subjective judgments at some point in the process (e.g., for assigning weights to the various parameters). Professionals should identify the subjective decisions required to use a particular methodology and consider whether these judgments can be made and validated to an appropriate level of accuracy.” Too often these judgments are difficult to validate with a repeatable level of accuracy without quantifiable data and methodology.
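As a hypothetical illustration of the guideline’s point about subjective weights (the audit areas, factor scores and weighting schemes below are all invented), notice how the same two audit areas can swap rank under two equally defensible sets of weights:

```python
# Two audit areas scored 1-5 on three risk factors (invented data).
areas = {
    "payments": {"likelihood": 4, "impact": 2, "control_weakness": 3},
    "treasury": {"likelihood": 2, "impact": 5, "control_weakness": 2},
}

def score(factors, weights):
    """Weighted risk score: sum of factor score times its subjective weight."""
    return sum(factors[k] * weights[k] for k in weights)

# Two plausible but subjective weighting schemes.
weights_a = {"likelihood": 0.5, "impact": 0.3, "control_weakness": 0.2}
weights_b = {"likelihood": 0.2, "impact": 0.6, "control_weakness": 0.2}

for name, factors in areas.items():
    print(f"{name}: scheme A = {score(factors, weights_a):.2f}, "
          f"scheme B = {score(factors, weights_b):.2f}")
```

Under scheme A, “payments” outranks “treasury”; under scheme B, the ranking flips. Nothing about the underlying facts changed, only the judgment-driven weights, which is exactly why such judgments need to be identified and validated.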
Scientific methods are the only proven way to develop degrees of confidence in risk assessment and correlations between cause and effect. “In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.” Automating data sampling makes it practical to draw larger and more frequent samples, substantially reducing the risk of sampling error. Trending sample data also helps auditors detect seasonality and other factors that arise from the ebb and flow of business dynamics.
Six Steps to Enhanced Assurance
- Identify the greatest opportunities to automate routine audit processes.
- Prioritize automation projects each budget cycle in coordination with operations, risk management, IT and compliance as applicable.
- Prioritize projects built on data sources shared across multiple stakeholders (operational data used by several functions); one-offs can be integrated over time as needed.
- Develop a secondary list of automation projects that allow for monitoring, business intelligence and confidentiality.
- Design automation projects with levels of security that maintain the integrity of the data based on users and sensitivity of the data.
- Consider the questions most important to senior executives.
“Look, I have got a rule,” General Powell said. “As an intelligence officer, your responsibility is to tell me what you know. Tell me what you don’t know. Then you’re allowed to tell me what you think. But you [should] always keep those three separated.”
– Tim Weiner reporting in the New York Times about wisdom former Director of National Intelligence Mike McConnell learned from General Colin Powell
The business case for audit automation has never been stronger given the demands on internal audit. Today, the tools are available to reduce waste, improve assurance, validate audit findings and provide for enhanced audit judgment on the risks that really matter to management and audit professionals.
When we think of hacking, we think of a network being hacked remotely by a computer nerd sitting in a bedroom, using code she’s written to steal personal data or money, or just to see if it is possible. The idea of a character breaking network security to take control of law enforcement systems has been imprinted on our psyche by TV crime shows; the real story, however, is at once more complex and simpler in execution.
The idea behind a cognitive hack is simple. Cognitive hack refers to the use of a computer or information system [social media, etc.] to launch a different kind of attack. A cognitive attack succeeds only to the extent that it can “change human users’ perceptions and corresponding behaviors in order to be successful.” Robert Mueller’s indictment of 13 Russian operatives is an example of a cognitive hack taken to the extreme, but it demonstrates the effectiveness and subtlety of an attack of this nature.
Mueller’s indictment describes an elaborately organized and surprisingly low-cost “troll farm” set up to launch an “information warfare” operation against U.S. political elections from Russian soil using social media platforms; it is extraordinary and dangerous. The danger of these attacks is only now becoming clear, but it is also important to understand the simplicity of a cognitive hack. To be clear, the Russian attack is extraordinary in scope, purpose and effectiveness; however, such attacks happen every day for much more mundane purposes.
Most of us think of these attacks as email phishing campaigns designed to lure you into clicking a link that gives attackers access to your data. Russia’s attack is simply a more elaborate and audacious version, built to influence what we think and how we vote and to foment dissent between political parties and the citizenry of a country. That is what makes Mueller’s detailed indictment even more shocking. Consider, for example, how TV commercials, advertisers and, yes, politicians have been very effective at using “sound bites” to simplify their story and appeal to certain target markets. The art of persuasion is a simple way to explain a cognitive hack: an attack focused on the subconscious.
It is instructive to look at the Russian attack rationally, from Russia’s perspective, in order to objectively consider how this threat can be deployed on a global scale. Instead of spending billions of dollars in a military arms race, countries can arm themselves with the ability to influence the citizens of another country for a few million dollars, simply through information warfare. A new, more advanced cadre of computer scientists is being groomed to build these sophisticated attacks and to defend against them. This is simply an old trick disguised in 21st-century technology through the use of the internet.
A playbook for hacking political campaigns has been refined and used effectively around the world, as documented in a March 2016 article. For more than 10 years, elections in Latin America have been a testing ground for how to hack an election. The drama in the U.S. reads like one episode of a long-running soap opera, complete with “hackers for hire,” “middlemen,” political conspiracy and interference by sovereign countries.
“Only amateurs attack machines; professionals target people.” – Bruce Schneier
Now that we know the rules have changed, what can be done about this form of cyberattack? Academics, government researchers and law enforcement have studied the problem for decades, but the general public is largely unaware of how pervasive the risk is and the threat it poses to our society and the next generation of internet users.
I wrote a book, Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, to chronicle this risk and proposed a cognitive risk framework to bring awareness to the problem. Much more is needed from every organization, government official and risk professional around the world to raise awareness. A new cognitive risk framework is needed to better understand these threats, to identify and assess new variants of attack and to develop contingencies rapidly.
Social media has unwittingly become a platform of choice for nation-state hackers, who can easily hide the identity of the organizations and resources involved in these attacks. Social media platforms are largely unregulated and therefore are not required to verify the identity and source of funding behind these kinds of operations. This may change, given the stakes involved.
Just as banks and other financial services firms are required to identify new account owners and their sources of funding, so should the technology providers behind social media sites, which may likewise be used as a venue for raising and laundering illicit funds to carry out fraud or attacks on a sovereign state. We now have explicit evidence of the threat this poses to emerging and mature democracies alike.
Regulation alone cannot address an attack this complex, and existing training programs have proven ineffective. Traditional risk frameworks and security measures are not designed to deal with attacks of this nature. Fortunately, a handful of information security professionals are now considering how to implement new approaches to mitigate the risk of cognitive hacks. The National Institute of Standards and Technology (NIST) is also working on an expansive new training program for information security specialists, specifically designed to address the human element of security; yet the public is largely on its own. The knowledge gap is huge, and the general public needs more than an easy-to-remember slogan.
A national debate between industry leaders is needed to tackle security. Silicon Valley and the tech industry, writ large, must also step up and play a leadership role in combating these attacks by forming self-regulatory consortiums to address the diversity and proliferation of cyber threats, closing vulnerabilities in new technology launches and developing more secure networking systems. The cost of cyber risk is growing far faster than the rate of inflation and will eventually become a drag on corporate earnings and national growth rates as well. Businesses must look beyond the “insider threat” model of security risk and reconsider how the work environment contributes to exposure to cyberattacks.
Cognitive risks require a new mental model for understanding “trust” on the internet. Organizations must begin to develop new trust measures for doing business over the internet and with business partners. The idea of security must also be expanded to include more advanced risk assessment methodologies along with a redesign of the human-computer interaction to mitigate cognitive hacks.
Cognitive hacks are asymmetric in nature, meaning that the downside of these attacks can significantly outweigh the benefits of risk-taking if they are not addressed in a timely manner. Because of this asymmetry, attackers seek the easiest route of access. Email is one example of a low-cost and very effective attack vector, one that leverages the digital footprint we leave on the internet.
Imagine a sandy beach where you leave footprints as you walk, but instead of the tide erasing them, they remain forever, with bits of data about you all along the way. Web accounts, free Wi-Fi networks, mobile phone apps, shopping websites and the like create a digital profile that may be more public than you realize. Now consider how your employees’ behavior on the internet during work connects back to this digital footprint, and you start to see how simple it is for hackers to breach a network.
A cognitive risk framework begins with an assessment of risk perceptions related to cyber risks at different levels of the firm. The risk perceptions assessment creates a Cognitive Map of the organization’s cyber awareness. This is called Cognitive Governance and is the first of five pillars for managing asymmetric risks; the other four pillars are driven by the findings in the cognitive map.
A cognitive map uncovers the blind spots we all experience when a situation at work or on the internet exceeds our experience in dealing with it successfully. Hackers exploit these natural blind spots to deceive us into changing our behavior: clicking a link, a video or a promotional ad, or even altering what we read. Trust, deception and blind spots are just a few of the concepts we must incorporate into a new toolkit called the cognitive risk framework.
There is little doubt that Mueller’s investigation into the sources and methods used by the Russians to influence the 2016 election will reveal more surprises, but one thing is no longer in doubt: the Russians have a new cognitive weapon that is deniable but still traceable, for now. They are learning from Mueller’s findings and will get better.
In 1981, Carl Landwehr observed that “Without a precise definition of what security means and how a computer can behave, it is meaningless to ask whether a particular computer system is secure.”[i]
Researchers George Cybenko, Annarita Giani and Paul Thompson of Dartmouth College introduced the term “cognitive hack” in 2002 in an article entitled “Cognitive Hacking: A Battle for the Mind.” “The manipulation of perception — or cognitive hacking — is outside the domain of classical computer security, which focuses on the technology and network infrastructure.” This is why existing security practice is no longer effective at detecting, preventing or correcting security risks such as cyberattacks.
More than three decades after Landwehr’s warning, cognitive hacks have become the most common tactic used by sophisticated hackers and advanced persistent threats. They are the least understood form of attack and operate below conscious awareness, allowing them to occur in plain sight. To understand their simplicity, one need look no further than the evening news. The Russian attack on the presidential election is the clearest example of how effective these attacks are; in fact, there is ample evidence that they were refined in the elections of emerging countries over many years.
A March 16, 2016 article in Bloomberg, “How to Hack an Election” chronicled how these tactics were used in Nicaragua, Panama, Honduras, El Salvador, Colombia, Mexico, Costa Rica, Guatemala, and Venezuela long before they were used in the American elections.
“Cognitive hacking [Cybenko, Giani, Thompson, 2002] can be either covert, which includes the subtle manipulation of perceptions and the blatant use of misleading information, or overt, which includes defacing or spoofing legitimate norms of communication to influence the user.” Reports of an army of autonomous bots creating “fake news,” or at best misleading information, in social media and on popular political websites are a classic signature of a cognitive hack.
Cognitive hacks are deceptive and highly effective because of a basic human bias toward believing things that confirm our long-held beliefs or the beliefs of our peer groups, whether social, political or collegial. Our perception is “weaponized” without our knowledge or full awareness that we are being manipulated. Cognitive hacks are most effective in a networked environment, where “fake news” can be picked up on social media as trending news or “viral” campaigns, influencing even more readers without any sign that an attack has been orchestrated. In many cases, the viral spread itself is a manipulation, driven by an army of autonomous bots across social media sites.
At its core, the manipulation of behavior has been practiced for years in the form of marketing, advertisements, political campaigns and wartime propaganda. During the World Wars, patriotic movies were produced to keep public spirits up and encourage volunteers to join the military. ISIS has been extremely effective at using cognitive hacks to lure an army of volunteers to its jihad, even in the face of the perils of war. We are more susceptible than we believe, which creates our vulnerability to cyber risks and allows the risk to grow unabated despite huge investments in security. Our lack of awareness of these threats, and the subtlety of the approach, make cognitive hacks the most troubling problem in security.
I wrote the book Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind to raise awareness of these threats. Security professionals must better understand how these attacks work and the new vulnerabilities they create for employees, business partners and organizations alike. More importantly, these threats are growing in sophistication and vary significantly, requiring security professionals to rethink the assurance provided by their existing defensive posture.
The sensitivity of the current investigations into political hacks by the House and Senate Intelligence Committees may prevent full disclosure of the methods and approaches used; however, recent news accounts leave little doubt as to their effect, as described by researchers more than 14 years ago and more recently in elections in Paris and in Central and South America. New security approaches will require a much better understanding of human behavior, and collaboration from all stakeholders, to minimize the impact of cognitive hacks.
I proposed a simple set of approaches in my book, but security professionals must begin to educate themselves about this new, more pervasive threat and go beyond simple technology solutions to defend their organizations. If you are interested in receiving research or other materials about this risk, or approaches to addressing it, please feel free to reach out.
[i] C.E. Landwehr, “Formal Models of Computer Security,” Computing Survey, vol. 13, no. 3, 1981, pp. 247-278.
It is the dead of winter in a lovely little village along the coastline of southern Maine, and a sudden Nor’easter pounds New England. To escape the cold and quench their thirst, three solitary figures decide to seek refuge in the only Irish pub open that night. Each has arrived, serendipitously, within 15 minutes of the others, and all are beginning to warm themselves by the fireplace next to the bar.
As they settle in, all three decide to share a pint or two and order food before they depart on their separate journeys. Not surprisingly, one pint leads to another, and before long the conversation has traversed world events and inevitably turns to their work and avocations.
The first figure pipes up, “I am a mechanic! I have seven professional certifications and have been taught by master mechanics from around the world.” The second figure interjects, “That’s really interesting. I am an artist! I interpret the complex and make it simple for my audience to understand.” Without hesitation the third figure interrupts and exclaims, “I am a scientist! I research and explore the unknown.”
After several more pints of beer the conversation has grown even more verbose, and an argument ensues. The artist asks the mechanic what types of mechanical repairs she handles, and the mechanic responds, “I am a risk mechanic! I have been certified in all varieties of risks, policies and procedures, and frameworks, and I speak regularly on the topic around the world.”
At this, the scientist asks the artist, “What does it mean that you interpret the complex and make it simple for your audience?” The artist says, “I study how people make decisions and help them manage risks by redesigning their work to solve complex problems!” The mechanic then elbows the artist and asks the scientist, “Well, what do you study?” The scientist proudly explains that she researches complex risk phenomena: “I have eight patents on this topic.”
As the storm outside subsides, the bartender, having overheard the arguments, decides his three patrons have had enough to drink for one night. He proposes a bet: whoever solves a complex risk problem has their tab paid.
“Solve this riddle,” says the bartender. “What does a rich man crave but can never buy? We chase it but can never find it. What makes fools of us all?”
Do you know the answer?
Musings of a Cognitive Risk Manager
In my last article, I explained the difference between traditional risk management and human-centered risk management and began building the case for why we must reimagine risk management for the 21st century. I purposely did not get into the details right away because it is really important to understand WHY some “Thing” must change before change can really happen. In fact, change is almost impossible without understanding why.
Why put on sunscreen if you didn’t know that skin cancer is caused by too much exposure to the sun’s ultraviolet rays? We know that drinking and driving is one of the deadliest causes of highway fatalities, BUT we still do it! Knowing the risk of some “Thing” doesn’t prevent us from taking the chance anyway. That is why diets are so hard to maintain and habits so hard to change. We humans do irrational things for reasons that we don’t fully understand. That is precisely WHY we need Cognitive Risk Governance.
Cognitive risk governance is the “Designer” of human-centered risk management! Sunscreen is effective (if you use it properly!) because its formulation was designed to protect our skin from ultraviolet rays. Diets are designed to help us lose weight. Therefore, cognitive risk governance must also design the outcomes that we seek!
This is radically different from any other risk framework. If you take the time to study any framework, 99% of the guidance is focused on the details of the activity you must do first. Do risk assessments, develop internal controls, create policies and procedures, blah blah blah…. The details are important, but what if your focus is on the wrong stuff, which too often is the case? If you have ever heard the phrase “Shoot first, then aim,” you now fully understand why most risk frameworks don’t work.
The fallacy of action is the root cause of failure in risk management programs.
It is really important to understand this concept so let me provide an illustration. If you want to create a car with fuel efficiency you must first design the car to get more mileage with the same amount of fuel.
In order to achieve better efficiency you must understand why cars are not fuel efficient, and to fully understand that, manufacturers must reimagine the car.
However, before you start changing the car you must decide how efficient you want the car to become.
Design starts with imagining the end state and then determining what steps to take to achieve the goal. This is how cognitive risk governance works in human-centered risk management.
The role of cognitive risk governance is to design new ways to reduce risks across the organization. In order to reduce risks we must understand why certain risks exist and determine the right reduction in risk we want to achieve. This is why cognitive risk governance is a radical departure from traditional risk management.
In contrast, traditional risk management advocates a Top Ten list of risks or a Risk Repository that inventories events. Unfortunately, the goal seems to be monitoring risks rather than reducing them. Risks cannot be completely eliminated; therefore, any “activity-focused” risk program will always find new risks to add to the list. A human-centered risk management program is focused on reducing risks to acceptable levels through design. But not all risks! The focus is on complex risks!
Cognitive risk governance is the process of designing human-centered risk management to address the most complex risks. Any distribution of risk data will show that 75-80% of risks are high-frequency, low-impact, yet traditional risk programs spend 90% of their energy dealing with the least important risks. The opportunity presented by a cogrisk governance model is to separate risks into appropriate levels of importance. Risks represent a range (distribution) of outcomes; one-dimensional approaches will therefore inevitably fail to address the full range of complex risks.
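One way to sketch that separation (all risk names, thresholds and figures below are invented for illustration) is a simple frequency/impact triage of a risk inventory:

```python
# Invented sample inventory: annual frequency and per-event dollar impact.
risks = [
    {"name": "data-entry error",   "freq_per_year": 500, "impact": 1_000},
    {"name": "invoice mismatch",   "freq_per_year": 200, "impact": 5_000},
    {"name": "system outage",      "freq_per_year": 2,   "impact": 250_000},
    {"name": "major cyber breach", "freq_per_year": 0.1, "impact": 5_000_000},
]

def tier(risk, freq_cut=10, impact_cut=100_000):
    """Assign a risk to a treatment tier using frequency/impact cutoffs."""
    hf = risk["freq_per_year"] >= freq_cut
    hi = risk["impact"] >= impact_cut
    if hf and not hi:
        return "high-frequency / low-impact: automate monitoring"
    if not hf and hi:
        return "low-frequency / high-impact: design-focused attention"
    return "mixed: assess case by case"

for r in risks:
    print(f"{r['name']:18s} -> {tier(r)}")
```

The bulk of the inventory lands in the high-frequency, low-impact tier suited to automated monitoring, freeing design effort for the few complex, low-frequency, high-impact risks.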
Developing a Cognitive Risk Governance Tool Kit
The toolkit for designing cognitive risk governance involves an understanding of a few concepts that any organization can implement.
Cognitive risk governance starts with a clear understanding of the difference between “Uncertainty” and “Risks.” Uncertainty is simply what you do not know, or what you lack clear insight into regarding the impact of its occurrence. Risks are known, but that does not mean you fully understand their nature. I do not subscribe to the semantic exercise of Known-Knowns, Known-Unknowns and Unknown-Unknowns; there is no rigor in this exercise, nor does it provide new insight into solving problems of importance.
The next concept in a cogrisk governance program involves developing risk intelligence and active defense. Risk intelligence is quantitative and qualitative data from which analysts can develop insights into complex risks. The processes of data management, data analysis and the formulation of risk intelligence may require a multidisciplinary team of experts, depending on the complexity of the organization and its risk profile.
Active defense, on the other hand, is the process of implementing targeted solutions driven by risk intelligence to capture new opportunities and reduce risk exposures that impede growth. Risk Intelligence and active defense will require solutions and new tools that may not be in use in traditional risk programs. Organizations are generating petabytes of data that are seldom leveraged strategically to manage risk. A cogrisk governance program is responsible for designing risk intelligence and active defense in ways that leverage these stores of data as well as external sources of intelligence.
In traditional risk management, the “Three Lines of Defense” model is a common approach to defending the organization. Yet to understand why change is needed, one need only look at how the military is re-engineering its workforce into a 21st-century model to address a new battleground fought with technology and cognition. It is no longer reasonable to expect an army of people with limited tools to analyze, with confidence, the movement of petabytes of data into, across and out of an organization.
The transformation in the military is being led by the Joint Chiefs of Staff, a corollary for Risk, Compliance, Audit and IT professionals. Risk professionals must lead the change from 19th-century risk practice to 21st-century human-centered risk management. Existing risk frameworks such as COSO, ISO and Basel have laid a good foundation to build on, but more needs to be done.
I will address these opportunities in more detail in subsequent articles, but for now let’s move to the next concept in a CogRisk governance model. The intersection of human-machine interactions has been identified as a critical vulnerability in cybersecurity. Moreover, poorly designed workstations that force employees to cobble together disparate data and systems to complete work tasks create inefficiencies that produce unanticipated risks in the form of human error.
The human-machine interaction represents two significant opportunities in a human-centered risk management program. The first is a reduction in cybersecurity vulnerability; the second is the capture of productivity gains through more efficient processes and a reduction in high-frequency, low-impact risks. I will defer a discussion of the cybersecurity opportunity to subsequent articles because of its scope. However, I do want to note that the opportunity to reduce human error risks is underappreciated.
The equation is a simple one, but very few organizations ever take the time to calculate the cost of inefficiency, even firms with advanced Six Sigma programs. Here is an oversimplified model: human error (75%) + uncontrollable risks (25%) = operational inefficiency (100%). From here it is easy to see the benefit of human-centered risk management. This is obviously a simplified model, including the statistical split, but it is not far from what empirical cross-industry analysis suggests.
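The back-of-the-envelope model above can be sketched in a few lines of Python. Note that the 75/25 split and the dollar figures below are illustrative assumptions from this simplified model, not empirical data:

```python
# Hypothetical back-of-the-envelope model: what share of operational
# losses could human-centered redesign plausibly address?
def addressable_loss(total_operational_loss, human_error_share=0.75):
    """Split operational losses into a human-error component (which
    redesign can target) and an uncontrollable remainder.
    The 75/25 split is an illustrative assumption, not a measured figure."""
    human = total_operational_loss * human_error_share
    uncontrollable = total_operational_loss - human
    return human, uncontrollable

# Assume $10M in annual operational losses (a hypothetical figure).
human, other = addressable_loss(10_000_000)
print(f"Addressable via redesign: ${human:,.0f}")   # $7,500,000
print(f"Uncontrollable residual:  ${other:,.0f}")   # $2,500,000
```

Even a crude split like this makes the business case concrete: the bulk of the loss sits in the component that redesign can actually reach.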
Human-centered risk management focuses on redesigning the conditions that cause human error, providing real payback in efficiency and business objectives. A risk program designed to facilitate safe and efficient interactions with technology improves risk management and helps grow the business. More on that topic later!
In the next article, I will discuss Intentional Control design and practical use cases for machine learning and artificial intelligence in risk management.
As I have done in previous articles, I invite others to become active participants in helping design a human-centered risk management program and contribute to this effort. If you are a risk professional, auditor, compliance officer, technology vendor or simply an interested party, I hope that you see the benefit of these writings and contribute if you have real-life examples.
James Bone is author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, lecturer on Enterprise Risk Management at Columbia’s School of Professional Studies in New York City and president of Global Compliance Associates, a risk advisory services firm and creator of the Cognitive Risk Management Framework.
Musings of a Cognitive Risk Manager
Before beginning a discussion on human-centered risk, it is important to provide context for why we must consider new ways of thinking about risk. The context matters because the change impacting risk management has happened so rapidly that we have hardly noticed. If you are under the age of 25, you take for granted the Internet as we know it today and the ubiquitous utility of the World Wide Web. Twenty-five years ago, dial-up modems were the norm, and desktop computers running Windows were rare outside large companies. Fast-forward to today, and we don't give a second thought to how a digital economy has changed the way we work, communicate, share information and conduct business.
What hasn’t changed (or what hasn’t changed much) during this same time is how risk management is practiced and how we think about risks. Is it possible that risks and the processes for measuring risk should remain static? Of course not, so why do we still depend solely on using the past as prologue for potential threats in the future? Why are qualitative self-assessments still a common approach for measuring disparate risks? More importantly, why do we still believe that small samples of data, taken at intervals, provide senior management with insights into enterprise risk?
The constant is human behavior!
Technology has been successful at helping us get more done whenever and wherever we need to conduct business. The change brought on by innovation has nearly eliminated the separation between our work and personal lives; as a result, businesses and individuals are now exposed to new risks that are harder to understand and measure. The semi-state of a hardened enterprise with a soft middle has created a paradox in risk management: Robust Yet Fragile. Organizations enjoy robust technological capability to network, partner and conduct business 24/7, yet they are more vulnerable, or fragile, to massive systemic risks. Why are we more fragile?
The Internet is the prototypical example of a complex system that is “scale-free” with a hub-like core structure that makes it robust to random loss of individual nodes yet fragile to targeted attacks on highly connected nodes or hubs. Likewise, large and small corporations are beginning to look more like diverse forms of complex systems with increased dependency on the Internet as a service model and a distributed network of vendors who provide a variety of services no longer deemed critical or cost effective to perform in house.
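The Robust Yet Fragile property described above can be illustrated with a toy hub-and-spoke network, a crude stand-in for a scale-free topology. The network below and its node counts are purely illustrative; the point is only that random node loss and targeted hub loss have wildly different consequences:

```python
import random

def largest_component(nodes, edges):
    """Size of the largest connected component, found via depth-first search.
    Edges touching removed nodes are ignored."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            n = stack.pop()
            size += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        best = max(best, size)
    return best

# Toy hub-and-spoke network: node 0 is the hub, nodes 1..20 are spokes.
nodes = list(range(21))
edges = [(0, i) for i in range(1, 21)]

# Random failure: losing one peripheral node barely matters.
spoke = random.choice(range(1, 21))
print(largest_component([n for n in nodes if n != spoke], edges))  # 20

# Targeted attack: losing the hub shatters the network.
print(largest_component([n for n in nodes if n != 0], edges))  # 1
```

Real scale-free networks have many hubs of varying degree, but the asymmetry holds: robust to random loss, fragile to targeted loss of the most connected nodes.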
Collectively, organizations have leveraged complex systems to respond to customer and stakeholder demands to create value, unwittingly becoming more exposed to fragility at critical junctures. Systemic fragility has been tested during recent distributed denial-of-service (DDoS) attacks on critical Internet service providers and recent ransomware attacks, both of which spread with alarming speed. What changed? After each event, risk professionals breathe a sigh of relief and continue pursuing the same strategies that leave organizations vulnerable to massive failure. The Great Recession of 2008-09 is yet another example of the fragility of complex systems and a tepid response to systemic risks. Do we mistakenly take survival as a sign of a cure for the symptoms of systemic illness?
After more than 20 years of explosive productivity growth, the layering of networked systems now poses some of the greatest risks to future growth and security. Inexplicably, productivity has stalled as humans become the bottleneck in infrastructure. Billions of dollars are currently rushing in to finance the next phase of the Internet of Things, which will extend our vulnerabilities to devices in our homes, our cars and, eventually, more. Is it really possible to fully understand these risks with 19th century risk management?
The dawn of the digital economy has resulted in the democratization of content and the disintermediation of past business models in ways unimaginable 20 years ago. I will spare you the boring science behind the limits of human cognition; let's just say that if you can't remember what you had for dinner last Wednesday night, you are not alone.
But is that enough reason to change your approach to risk management? Not surprisingly, the answer is Yes! Acknowledging that risk managers need better tools to measure more complex and emerging risks should no longer be considered a weakness. It also means that expecting employees to follow, without fail or assistance, the growing complexity of policies, procedures and IT controls required to deal with a myriad of risks may be unrealistic without better tools. 21st century risk management approaches are needed to respond to the new environment in which we now live.
Over the last 30 years, risk management programs have been built "in response" to risk failures in systems, processes and human error. Human-centered risk management starts with the human and redesigns internal controls to optimize the objectives of the organization while reducing risks. This may sound like a subtle difference, but it is in fact a radically different approach, though not a new one.
Human-factors engineers first met in 1955 in Southern California, but their contributions to safety across diverse industries are now underappreciated. We don't give a second thought to the technology that protects us when we travel in our cars, trucks and airplanes or undergo complex medical procedures. These advances in risk management did not happen by accident; they were designed into the products and services we enjoy today!
Each of these industries recognized that human error posed the greatest risk to its objectives. Instead of blaming humans, however, they sought ways to reduce the complexity that leads to human error and found innovative ways to grow their markets while reducing risks. Imagine designing internal controls that are as intuitive as using a cell phone, allowing employees to focus on the job at hand instead of being distracted by multitasking! A human-centered risk program looks at the human-machine interaction to understand how the work environment contributes to risk.
I will return to this concept in subsequent papers to explain how the human-machine interaction contributes to risk. For now, let’s suffice it to say that there is sufficient research and empirical data to support the argument. To further explain a human-centered risk approach we must also understand how decision-making is impacted as a result of 19th century risk practices.
Situational awareness is a critical component of human-centered risk management. It comprises one's perception of events, comprehension of their meaning, projection of their status after events change or new data is introduced, and the ability to predict clearly how change impacts outcomes and expectations. The opportunity in risk management is to improve situational awareness across the enterprise. Enterprise risks are important, but they are not all equal and should not be treated the same. Situational awareness helps senior executives understand the difference.
The challenge in most organizations is that situational awareness is assumed to be a byproduct of experience and training and is seldom revisited when the work environment changes to absorb new products, processes or technology. This failure to understand the vulnerability in risk perception happens at all levels of the organization, from the boardroom down to the front line. The vast majority of change introduced in organizations tends to be minor in nature but accumulates over time, contributing to a lack of transparency, or inattentional blindness, that impairs situational awareness. This is one of the many reasons organizations are surprised by unanticipated events. We simply cannot see it coming!
Human-centered risk management focuses on designing situational awareness into the work environment from the boardroom down to the shop floor. This multidisciplinary approach requires a new set of tools and cognitive techniques to understand when imperfect information could lead to errors in judgment and decision-making. The principles and processes for designing situational awareness will be discussed in subsequent articles. The goal of human-centered risk management is to design scalable approaches to improve situational awareness across the enterprise.
Human-factors design and situational awareness meet at the "crossroads of technology and the liberal arts," to quote the visionary Steve Jobs. These two factors in human-centered risk management can be achieved by selecting targeted approaches, which will be discussed in more detail in subsequent articles. I invite others to participate in this discussion if you too have an interest in reimagining approaches to risk management.
The COSO ERM framework is being revised with a new tagline: Enterprise Risk Management – Aligning Risk with Strategy and Performance. Dennis Chesley, PwC's Global Risk Consulting leader and lead partner for the COSO ERM effort, recently stated, "Enterprise risk management has evolved significantly since 2004 and stands at the verge of providing significant value as organizations pursue value in a complex and uncertain environment." Chesley goes on to state, "This update establishes the relationship between risk and strategy, positions risk in the context of an organization's performance, and helps organizations anticipate so [they] can get ahead of risk and embrace a mindset of resilience."
Additionally, the ISO 31000:2009 risk framework is being revised. "The revision of ISO 31000:2009, Risk management – Principles and guidelines, has moved one step further to Draft International Standard (DIS) stage where the draft is now available for public comment," according to the International Organization for Standardization's website. As explained by Jason Brown, Chair of ISO technical committee ISO/TC 262, Risk management: "The message our group would like to pass on to the reader of the [DIS] is to critically assess if the current draft can provide the guidance required while remaining relevant to all organizations in all countries. It is important to keep in mind that we are not drafting an American or European standard, a public or financial services standard, but much rather a generic International Standard."
And finally, the Basel Committee on Banking Supervision is rolling out, in phases, its final updated reform measures (Basel III) to ensure that bank capital and liquidity measures provide resilience to systemic risks in financial markets. The magnitude and breadth of these changes may feel overwhelming, depending on where you sit on the spectrum of change impacting your business.
Likewise, more complex and systemic risks such as cybersecurity have prompted the National Institute of Standards and Technology to revise and update its Cybersecurity Framework, not to mention changes to Dodd-Frank, healthcare regulation and a host of other regulatory mandates. So where does the value proposition happen in risk management? Given the increasing velocity of change in business and regulatory requirements, how does a risk professional in compliance, audit, risk and/or IT security demonstrate an effective and repeatable value proposition while struggling to keep pace?
To begin, we must first acknowledge that, like risk management, the term "value" has very different meanings for different stakeholders. A shareholder's definition of value will most likely differ from a customer's. Given this context, we can focus on the value proposition derived from a risk professional's contribution to each stakeholder. However, we need more information to fully understand how a risk professional might approach this topic. If you are an internal auditor, you may take a risk-based approach to the audits you perform. If your role is that of a regulatory compliance professional, value is derived from ensuring the effectiveness of internal controls, ethics and awareness. The same is true of the contributions each oversight team makes. In studying other risk professionals, I have begun to learn that I need to expand my definition of value to incorporate disciplines beyond my own skill set.
Sean Lyons, author of "Corporate Defense and the Value Preservation Imperative," focuses on key strategies to preserve value by expanding the corporate defense model from three to five lines of defense, creating an enterprise-wide risk approach. Andrea Bonime-Blanc, author of "The Reputation Risk Handbook," focuses on the importance of understanding the difference between reputation management and reputation risk. Dr. Bonime-Blanc makes a compelling argument for developing clear steps to manage the key risks that pose the greatest potential damage to a firm's reputation by adopting an enterprise risk approach to reputation risk. In thinking about where my own practice adds value, I have proposed a Cognitive Risk Framework for Cybersecurity and extended the model to enterprise risk management. The basis for a cognitive risk framework is decades of research in behavioral economics and cognitive/decision science, along with a deep look at the human-machine interaction, as a way to infuse human elements into risk management, much as automobile manufacturers, NASA and the aerospace industry have redesigned the interiors of their vehicles to account for human behavior and make travel safer.
What is exciting about these and many other new developments in the risk profession is that value can be derived from each of these approaches. In fact, while each practice may seem unique, the differences complement one another because risk is not one-dimensional. The risk profile of many firms has changed and evolved in ways that require more than one view of how to manage the myriad threats facing a firm. The permutations of risk exposure will only expand given the velocity of change in technology and the computing power being acquired by, and expected of, our competitors, customers and adversaries alike.
The challenge for organizations is not to assume that a one-dimensional approach to risk management is sufficient for dealing with three-dimensional risks carrying a great deal of uncertainty.
The value proposition of risk management viewed from this perspective suggests that a cross-disciplinary approach is needed. Even greater value can be created through thoughtful design, value preservation and sustainable practices and behaviors. By this standard, risk management informs and supports the strategic plan through the value it creates for each of its stakeholders. The lesson is that organizations should not get stuck in one dogmatic approach to managing risks while assuming it is sufficient for today's risk environment. What we learn from others is simply another way that value is created for the organization.
Behavioral economics has only recently begun to garner acceptance from mainstream economists as a rigorous discipline that may serve as an alternative perspective on decision-making. However, the broad acceptance and growing adoption of behavioral economic theories and concepts, along with advancements in computational firepower, present opportunities to put practical applications to work in improving risk management practice. The goal of this article is to develop a contextual model of a cognitive risk framework for enterprise risk management, framing the limitations and possibilities of enhancing enterprise risk management by combining behavioral science with a more rigorous analytical approach. The thesis of this paper is that managers and staff are prone to natural limitations in Bayesian probability judgments, as well as errors of judgment due in part to insufficient experience or data to draw reliably consistent conclusions with great confidence. In this context, a cognitive risk framework helps to recognize these limitations in judgment. The Cognitive Risk Framework for Cybersecurity and its Five Pillars are offered as guides for developing an advanced enterprise risk framework to deal with complex and asymmetric risks such as cyber risks.
“A major task in organizing is to determine, first, where the knowledge is located that can provide the various kinds of factual premises that decisions require.” – Herbert Simon
In a 1998 review of Amos Tversky's contributions to behavioral economics, Laibson and Zeckhauser discussed how Tversky systematically exposed the theoretical flaws in the assumption that individual actors behave rationally in pursuit of perfect optimality. Tversky and Kahneman's "Judgment under Uncertainty: Heuristics and Biases" (1974) and "Prospect Theory" (1979) demonstrated that actual decisions involve systematic error. "The rational choice advocates assume that to predict these errors is difficult or, in the more orthodox conception of rationality, impossible. Tversky's work rejects this view of decision-making. Tversky and his collaborators show that economic rationality is systematically violated, and that decision-making errors are both widespread and predictable. This now incontestable point was established by two central bodies of work: Tversky and Kahneman's papers on heuristics and biases, and their papers on framing and prospect theory."
Much of Tversky and Kahneman's work is less well known to the general public and is misinterpreted as purely theoretical by some risk professionals. As researchers, Tversky and Kahneman were well versed in mathematics, which helped shine a light on systematic errors in complex probability judgments and the use of heuristics in inappropriate contexts. As groundbreaking as behavioral science has been in challenging economic theory, Tversky and Kahneman's work centers on a narrow set of heuristics: representativeness, availability and anchoring. The authors used these three foundational heuristics broadly to describe how decision-makers substitute mental shortcuts for probabilistic judgments, resulting in biased inferences and a lack of rigor in decisions made under uncertainty.
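A classic example of the representativeness heuristic is base-rate neglect: decision-makers judge by how well a case fits a stereotype and ignore how rare the underlying condition is. Bayes' rule makes the error visible. The fraud-detection numbers below are purely illustrative, chosen only to show the gap between intuition and the computed posterior:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(condition | positive signal).

    prior               -- base rate of the condition, P(condition)
    sensitivity         -- P(signal | condition)
    false_positive_rate -- P(signal | no condition)
    """
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: a fraud flag that catches 90% of fraud with a 5%
# false-positive rate, where only 1% of transactions are fraudulent.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.05)
print(f"P(fraud | flagged) = {p:.2%}")  # ~15%, not the intuitive 90%
```

Intuition anchors on the 90% sensitivity; the arithmetic shows most flagged transactions are still legitimate because fraud is rare. This is exactly the kind of probabilistic judgment the heuristics literature shows people get wrong.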
Cognitive Risk Framework: Harnessing Advanced Technology for Decision Support
In the decades since Prospect Theory, data analytics expertise and computational firepower have made significant progress in addressing the weaknesses in Bayesian probability judgment recognized by Tversky and Kahneman. Additionally, the automotive industry and Apple Inc., among others, have successfully incorporated behavioral science into product design to reduce risk, anticipate human error and improve the user experience, adding value to financial results. This paper assumes that these early examples of progress point to untapped potential if applied in constructive ways. There are detractors, and even Tversky and Kahneman admitted to inherent weaknesses that are not easy to solve: observers are skeptical that laboratory results replicate real-life situations, that arbitrary frames reflect reality, and that the theory offers mathematical predictive accuracy.
Since Laibson and Zeckhauser's (1998) review of Tversky's contributions to economics, a large body of research in cognition has evolved to include Big Data, computational neuroscience, cognitive informatics, cognitive security, intelligent informatics, and rapid early-stage advancements in machine learning and artificial intelligence. A cognitive risk framework is proposed to leverage the rapid advancement of these technologies in risk management; however, technology alone is not a panacea. Many of these technologies are still evolving, and progress will continue in stages, requiring risk professionals to consider how to formalize steps to incorporate these tools into an enterprise risk management program in combination with other human elements.
The Cognitive Risk Framework anticipates that, as promising as these new technologies are, they represent only one pillar of a robust and comprehensive framework for managing increasingly complex threats such as cyber and enterprise risks. The Five Pillars are Intentional Controls Design, Intelligence and Active Defense, Cognitive Risk Governance, Cognitive Security Informatics, and Legal "Best Efforts" Considerations. A cognitive risk framework does not supplant other risk frameworks such as COSO ERM, ISO 31000 or NIST standards for managing a range of enterprise risks. It is presented to leverage the progress made in risk management and provide a pathway to demonstrably enhance enterprise risk management, using advanced analytics to inform decision-making in ways only now possible. At the core of the framework is an assumption about data.
One of the core tenets of Prospect Theory is the recognition of decision-making errors derived from small sample sizes or poor-quality data. Tversky and Kahneman noted several instances in which even very skilled researchers routinely made errors of inference derived from poor sampling techniques. Many recognize the importance of data; however, organizations must anticipate that a cross-disciplinary team of experts is needed to actualize a cognitive risk framework. Data will become either the engine of a cognitive risk framework or its Achilles' heel, and it may be the most underestimated investment in ramping up a cognition-driven risk program. A cognitive risk framework anticipates a much more diverse set of skills than currently exists in risk management and IT security.
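The small-sample problem above is easy to demonstrate numerically. The sketch below, using only the Python standard library, simulates an audit metric with a true mean of 100 and noise, then compares how much sample means wander for a sample of 5 observations versus 500. The specific population parameters are arbitrary assumptions for illustration:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def sample_mean_spread(population_mean, n, trials=1000):
    """Standard deviation of sample means across many repeated samples
    of size n, drawn from a noisy process (sigma = 10). A large spread
    means any single small sample is an unreliable estimate."""
    means = []
    for _ in range(trials):
        sample = [random.gauss(population_mean, 10) for _ in range(n)]
        means.append(statistics.fmean(sample))
    return statistics.pstdev(means)

# A small audit sample of 5 observations vs. a larger one of 500:
print(f"spread of sample means, n=5:   {sample_mean_spread(100, 5):.2f}")   # ~4.5
print(f"spread of sample means, n=500: {sample_mean_spread(100, 500):.2f}") # ~0.45
```

The spread shrinks roughly with the square root of the sample size, which is why conclusions drawn from a handful of observations, the "law of small numbers" error Tversky and Kahneman documented, are so fragile.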
Data is but one consideration in developing a robust cognitive risk framework. Others include developing structure and processes that allow ease of adoption by practitioners across multiple industries and in organizations of different sizes. While it is anticipated that a cognitive risk framework can be successfully implemented in both large and small organizations, risk professionals may decide to adopt a modified version of the Five Pillars or develop solutions to address specific risks, such as cybersecurity, as a standalone program. It is anticipated that, if cognitive risk frameworks are adopted more broadly, technology firms and standards organizations will take an active role in developing complementary programs that leverage these frameworks to advance enterprise risk management using advanced analytics and cognitive elements.
“Never let the facts get in the way of a good argument”
Facts, or more precisely our understanding of facts, seem to have become more transient in the information age. Or have they? The Internet has radically changed how we access information in ways that few appear to challenge or even understand. Today, anyone can Google a fact, story or news event about any topic imaginable and "learn" about it instantly with a few keystrokes. We are bombarded with opinion pieces, rumors, false news stories and innuendo, often without bothering to check the validity of the stories. In fact, depending on the viewer, the facts are easily dismissed when the "information" disagrees with one's views or beliefs about the topic. So the question here is: has the information age inhibited critical thinking? Risk managers are not immune to these same biases, and the implications may help explain why risk management is at risk of failing.
It turns out that the dictionary definition of "truth" does not answer the question of what a truth really is. Here are a few examples. Merriam-Webster states that truth is "sincerity in action, character, and utterance"; or "the state of being the case: a fact"; or "the body of real things, events, and facts"; or "a transcendent fundamental or spiritual reality"; or "a judgment, proposition, or idea that is true or accepted as true"; or, my favorite, "the body of true statements and propositions." Dictionary.com has 10 different definitions, each in contrast with Merriam-Webster's. In other words, truth is what we believe it to be. You know you are in trouble when truth and transcendent spiritual reality are used in the same definition. Apparently, we have no idea what a truth is, or we are simply more confused than ever as we are bombarded with different truths.
But why is this important for risk professionals? If the truth changes based on evolving norms, opinions, perceptions and biases, how does a risk professional manage emerging risks in an environment where old truths conflict with new ones? Operating models change as new leadership imposes its view on old ways of working, requiring risk professionals to ask how these new risks should be assessed. What was once indisputable no longer applies, and old assumptions are considered impediments to progress. Or are they?
In the age of Big Data, corporations are in search of the truth about customer behavior, buying preferences and the risk of strategic plans. However, even with the assistance of advanced analytics, we are more "archaeologists" than true scientists. Archaeologists apply a body of knowledge and a great deal of conjecture in constructing their view of the past. Each new discovery has the potential to disrupt or partially validate assumptions about what ancient civilizations or animals were really like. We don't have enough information to confirm these conjectures; we instead believe them in the absence of data that contradicts them. This is the crude method by which humans learn: through trial and error. If something works reasonably well over time, it becomes the truth. If it fails miserably, it is considered not to be the truth. But we know from scientific experiments that truth can be derived from failures, even massive failures like the space shuttle catastrophes or major battles in war. We "learn" from mistakes and vow never to repeat them.
The truth is we seldom, if ever, have perfect information. Imperfect information is uncertainty, NOT risk. Risk is a known quantity: it can be measured, and we know to avoid it or accept it, which is why we call it a risk. The failure in risk management is not knowing the difference. Fear, confusion and hope are signs of uncertainty, emotional signals that we have crossed the Rubicon of not knowing whether outcomes will result in losses or gains. This is when risk managers become archaeologists. Archaeological risk managers try to develop stories from past experience and imperfect information to describe new truths using old methods. This happens in every industry from insurance to financial services and beyond, and it partly explains why we miss really big emerging risks until a "learning" experience teaches us what a risk really looks like.
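The risk/uncertainty distinction above, originally Frank Knight's, can be made concrete with a few lines of arithmetic. Under risk the loss distribution is known, so an expected loss is computable; under uncertainty the probabilities are unknown, and only scenario bounds are defensible. All the figures below are hypothetical:

```python
# Risk: the loss distribution is known, so expected loss is computable.
known_losses = {0: 0.90, 100_000: 0.08, 1_000_000: 0.02}  # loss -> probability
expected_loss = sum(loss * p for loss, p in known_losses.items())
print(f"Expected loss under risk: ${expected_loss:,.0f}")  # $28,000

# Uncertainty: probabilities are unknown, so only scenario bounds are honest.
scenario_losses = [0, 100_000, 1_000_000]  # plausible outcomes, no probabilities
print(f"Bounds under uncertainty: ${min(scenario_losses):,} to ${max(scenario_losses):,}")
```

Quoting a single expected-loss number when only the scenario list is known is precisely the archaeologist's move: describing a new truth with a method the available information does not support.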
Fear, confusion and hope are natural responses of our primitive "fight or flight" survival mechanisms. These emotional responses are also signals that we must tread lightly, gather information gradually and take measured risks without betting the farm on a shiny new thing that may be a train coming through the tunnel of darkness.
How can risk professionals avoid the freight train? Don't be afraid to say you don't know. When worry, fear and confusion permeate communications, that is a signal a freight train may be barreling down the tracks. Use this time to separate what you know from what you don't know. Understanding the difference is critical because it gives risk managers direction for gathering information, performing advanced assessments and drawing definable boundaries where risks may be lurking. It is also important to understand that huge potential is the other side of uncertainty. Big rewards can be found when uncertainty is at its highest, but risk professionals must take a measured approach to understanding the upside of uncertainty.
This is not the time to follow the crowd.
The upside of uncertainty requires risk managers to seek opportunity where others are fleeing or cannot see how new rules may benefit organizations poised to leverage change. What risk professionals must avoid during uncertainty is becoming archaeologists. Old methods may help tell a compelling story, but the real risks and the upside of uncertainty will be lost if the new rules obscure what the truth really is.