In my previous articles, I introduced Human-Centered risk management and the role that Cognitive Risk Governance should play in designing the risk and control environment outcomes that you want to achieve. One of the key outcomes was briefly described as situational awareness that includes the tools and ability to recognize and address risks in real time. In this article, I will delve deeper into how to redesign the organization using cognitive tools while reimagining how risks will be managed in the future. Before I explore “the how” let’s take a look at what is happening right now.
This concept is not some futuristic state! On the contrary, this is happening in real time. BNY Mellon, one of the oldest firms on Wall Street, has begun a transformation to a cognitive risk governance environment. BNY Mellon is not the only Wall Street titan leading this charge. JP Morgan, BlackRock, and Goldman Sachs, among others, are hiring Silicon Valley talent to transform banking, in part to remain competitive and to strategically reduce costs, innovate, and build scale not possible with human resources alone. The banks have taken a very targeted approach to specific areas of opportunity within the firm and are seeking new ways to bring innovation to customer service and new product development, and to create efficiencies that will have profound implications for risk, audit, compliance, and IT now and in the foreseeable future.
As these early-stage projects expand, the transformation taking place today will position these firms with competitive advantages few can anticipate. I do not know the business plans of BNY Mellon, JP Morgan, BlackRock, or Goldman Sachs, but it is safe to say that each of these firms will see the benefits of implementing targeted solutions with smart systems to augment decision-making and drive growth. They may also reduce risks in the process. However, as these firms grow their smart technology portfolios, it will become obvious that a strategic plan must include an overarching Cognitive Risk Governance program that goes deeper than IT efficiencies, investment management, and one-off cost savings in contract reviews. I applaud the approach these firms are taking, but this is low-hanging "tactical fruit"; still, one must start somewhere!
The real question is what role will risk management, audit, and compliance play in this new cognitive risk era? Will oversight functions continue to be observers of change or leaders in change with a risk framework that contemplates an enterprise approach to smart systems? Will oversight functions seek opportunity in this new cognitive risk era or choose to ignore the growth of these advances?
The Cognitive Risk Framework for Enterprise Risk Management has been presented in earlier articles as a set of pillars that integrate human elements with technology, because technology alone is not enough! Smart systems will reduce costs, in some cases by reducing redundant staff and in others by reducing the need to add people to build scale. However, without a more comprehensive approach, the limits of a technology-only strategy will become obvious as soon as the cost savings decline.
If firms truly want to create a multiplier effect of cost savings and scale the transformation must include technology that assists humans to become more productive!
If operational and residual risks represent the bulk of inefficient bottlenecks, or have limited a firm's ability to respond quickly to changes in the business environment, a well-designed cognitive risk framework offers firms the ability to free up the back- and middle-office environment. How so?
Introduction to Intentional Control Design, Machine Learning & Situational Awareness
First, automation trumps big data analytics!
I know that Big Data, Predictive Analytics, Machine Learning, and Artificial Intelligence sound sexy, seem cool, and are the future! But let's work in the real world for a moment. Google has made great advances in machine learning, but if you actually take the time to read its research literature (since perhaps 1% or less of the pundits do), you will find that the actual use cases have been limited. The real opportunities involve routine processes with very large pools of well-defined data.
You can’t teach a machine to be smart with dumb data
If you have unlimited resources, or simply want to throw away money, then start a Big Data project with unstructured, random data! Some may argue the benefits of this approach, but consider this: most firms produce petabytes of structured data every single day in production environments, data that is rarely leveraged to its full capacity. Why not start with a good data source and automate the processes that produce this data to help humans get their jobs done more efficiently? Want to ensure internal controls work flawlessly? Automate them! Want to ensure compliance with regulatory mandates? Automate it! Want to produce real-time audit sampling and monitoring? Automate it!
Design the risk, compliance, IT, and audit outcomes that you need! Intentional Control Design takes advantage of machine learning in the most efficient manner through the corpus of data that already exists in production.
Once you do that you have your big data projects solved! Need audit data to test compliance? Done! Need risk assessments with real data? Done! Need to check fraudulent activity? Done!
If you want to create situational awareness for how your firm is operating in real time, design it! Automation trumps Big Data analytics, but most get this backwards!
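To make "automate it" concrete, here is a minimal sketch of what an automated internal-control check over structured production data might look like. The transaction fields, control rules, and approval threshold are all hypothetical assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float
    initiator: str
    approver: str

# Hypothetical control rules: segregation of duties plus an amount threshold.
APPROVAL_LIMIT = 10_000.0

def control_exceptions(txns):
    """Return (txn_id, reason) pairs for transactions that violate a control,
    producing a real-time exception feed for audit and compliance review."""
    exceptions = []
    for t in txns:
        if t.approver == t.initiator:        # segregation-of-duties control
            exceptions.append((t.txn_id, "self-approved"))
        elif t.amount > APPROVAL_LIMIT:      # approval-threshold control
            exceptions.append((t.txn_id, "over approval limit"))
    return exceptions

# Hypothetical daily production feed
feed = [
    Transaction("T1", 500.0, "bob", "alice"),
    Transaction("T2", 25_000.0, "dave", "carol"),
    Transaction("T3", 900.0, "erin", "erin"),
]
print(control_exceptions(feed))  # -> [('T2', 'over approval limit'), ('T3', 'self-approved')]
```

Run continuously against the production feed, a check like this doubles as real-time audit sampling: the exception list is the audit evidence.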
Unstructured data requires human annotation, which increases costs exponentially, so why start there? It may not be sexy, but the money you save will make you feel better than the money you lose chasing glamor projects that add little value.
Automation gives you situational awareness through true transparency! Transparency gives the board and senior management the ability to adjust in a more timely manner. If you want a no-surprises business environment, consider designing one. It doesn't happen by accident, nor does it happen by threatening staff not to make mistakes!
Cars are safer today than 40 years ago because of design! Airline travel is safer today because of design. Amazon, Facebook, Google, and Apple have overtaken traditional business models by design!
There are a number of residual benefits that I haven't discussed in detail yet, such as reduced cyber risk, less employee burnout, increased staff productivity, and many more. I saved these for last because we always forget that humans are the real engines of business growth.
If you are still an unbeliever, just take a look at the store closings in the retail industry, where firms failed to listen to the change created by the internet and companies like Amazon. I understand that change is hard, but without change it will be harder to keep up and survive in an environment that moves in nanoseconds!
Musings of a Cognitive Risk Manager
In my last article, I explained the difference between traditional risk management and human-centered risk management and began building the case for why we must reimagine risk management for the 21st century. I purposely did not get into the details right away because it is really important to understand WHY some “Thing” must change before change can really happen. In fact, change is almost impossible without understanding why.
Why put on sunscreen if you didn't know that skin cancer is caused by too much exposure to ultraviolet rays from the sun? We know that drinking and driving is one of the deadliest causes of highway fatalities, BUT we still do it! Knowing the risk of some "Thing" doesn't prevent us from taking the chance anyway. This is why diets are so hard to maintain and habits are so hard to change. We humans do irrational things for reasons that we don't fully understand. That is precisely WHY we need Cognitive Risk Governance.
Cognitive risk governance is the "Designer" of human-centered risk management! The sunscreen is effective (if you use it properly!) because the formulation of its ingredients was designed to protect our skin from ultraviolet rays. Diets are designed to help us lose weight. Therefore, cognitive risk governance must also design the outcomes that we seek!
This is radically different from any other risk framework. If you take the time to study any framework, 99% of the guidance is focused on the details of the activity you must do first. Do risk assessments, develop internal controls, create policies and procedures, blah blah blah. The details are important, but what if your focus is on the wrong stuff, which too often is the case? If you have ever heard the phrase "shoot first, then aim," then you now fully understand why most risk frameworks don't work.
The fallacy of action is the root cause of failure in risk management programs.
It is really important to understand this concept so let me provide an illustration. If you want to create a car with fuel efficiency you must first design the car to get more mileage with the same amount of fuel.
To achieve better efficiency, you must first understand why cars are not fuel efficient; and to fully understand why, manufacturers must reimagine the car.
However, before you start changing the car you must decide how efficient you want the car to become.
Design starts with imagining the end state and then determining what steps to take to achieve the goal. This is how cognitive risk governance works in human-centered risk management.
The role of cognitive risk governance is to design new ways to reduce risks across the organization. In order to reduce risks we must understand why certain risks exist and determine the right reduction in risk we want to achieve. This is why cognitive risk governance is a radical departure from traditional risk management.
In contrast, traditional risk management advocates for a Top Ten list of risks or a Risk Repository that inventories events. Unfortunately, the goal seems to be focused on monitoring risks as opposed to risk reduction. Risks cannot be completely eliminated therefore any “activity-focused” risk program will always find new risks to add to the list. A human-centered risk management program is focused on reducing risks to acceptable levels through design. But not all risks! The focus is on complex risks!
Cognitive risk governance is the process of designing human-centered risk management to address the most complex risks. Any distribution of risk data will show that 75-80% of risks are high-frequency, low-impact risks, yet traditional risk programs spend 90% of their energy dealing with the least important ones. The opportunity presented by a cognitive risk governance model is to separate risks into appropriate levels of importance. Risks represent a range (distribution) of outcomes; therefore, one-dimensional approaches will inevitably fail to address the full range of complex risks.
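One simple way to make that separation operational is to triage a risk inventory into frequency/impact quadrants. The sketch below does exactly that; the cutoffs and the example figures are hypothetical assumptions chosen for illustration, not empirical thresholds.

```python
def triage(risks, freq_cut=10, impact_cut=1_000_000):
    """Bucket (name, events/year, loss-per-event $) risks into four quadrants:
    HF/LF = high/low frequency, HI/LI = high/low impact."""
    buckets = {"HF-HI": [], "HF-LI": [], "LF-HI": [], "LF-LI": []}
    for name, freq, impact in risks:
        key = ("HF" if freq >= freq_cut else "LF") + "-" + \
              ("HI" if impact >= impact_cut else "LI")
        buckets[key].append(name)
    return buckets

# Hypothetical inventory: most entries are high-frequency, low-impact,
# while the complex risks sit in the low-frequency, high-impact quadrant.
portfolio = [
    ("data-entry errors",        500, 2_000),
    ("failed trade settlements",  50, 40_000),
    ("minor system outages",      20, 10_000),
    ("major cyber breach",       0.1, 50_000_000),
    ("regulatory fine",          0.5, 10_000_000),
]
for quadrant, names in triage(portfolio).items():
    print(quadrant, names)
```

The point of the exercise is the split itself: the bulk of the list lands in HF-LI, where automation pays off, while the LF-HI quadrant is where cognitive risk governance should spend its design effort.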
Developing a Cognitive Risk Governance Tool Kit
The toolkit for designing cognitive risk governance involves an understanding of a few concepts that any organization can implement.
Cognitive risk governance starts with a clear understanding of the difference between "uncertainty" and "risk." Uncertainty is simply what you do not know or do not have clear insight into, including the impacts of its occurrence. Risks are known, although that does not mean you fully understand their nature. I do not subscribe to the semantic exercise of known-knowns, known-unknowns, and unknown-unknowns. There is no rigor in this exercise, nor does it provide new insights into solving problems of importance.
The next concept in a cogrisk governance program involves developing risk intelligence and active defense. Risk intelligence is quantitative and qualitative data from which analysts are better able to develop insights into complex risks. The processes of data management, data analysis, and the formulation of risk intelligence may require a multidisciplinary team of experts, depending on the complexity of the organization and its risk profile.
Active defense, on the other hand, is the process of implementing targeted solutions driven by risk intelligence to capture new opportunities and reduce risk exposures that impede growth. Risk Intelligence and active defense will require solutions and new tools that may not be in use in traditional risk programs. Organizations are generating petabytes of data that are seldom leveraged strategically to manage risk. A cogrisk governance program is responsible for designing risk intelligence and active defense in ways that leverage these stores of data as well as external sources of intelligence.
In traditional risk management, the "Three Lines of Defense" model is a common approach used to defend the organization. Yet to understand why change is needed, one need only look at how the military is re-engineering its workforce into a 21st-century model to address a new battleground fought with technology and cognition. It is no longer reasonable to expect an army of people with limited tools to analyze the movement of petabytes of data into, across, and outside of an organization with confidence.
The transformation in the military is being led by the Joint Chiefs of Staff, a corollary for risk, compliance, audit, and IT professionals. Risk professionals must lead the change from 19th-century risk practice to 21st-century human-centered risk management. Existing risk frameworks such as COSO, ISO, and Basel have laid a good foundation from which to build, but more needs to be done.
I will address these opportunities in more detail in subsequent articles but for now let’s move to the next concept in a CogRisk governance model. The intersection of human-machine interactions has been identified as a critical vulnerability in cyber security. However, poorly designed workstations that require employees to cobble together disparate data and systems to complete work tasks represent inefficiencies that create unanticipated risks in the form of human error.
The intersection of the human-machine interaction represents two significant opportunities in a human-centered risk management program. The first opportunity is a reduction in cybersecurity vulnerability, and the second is the capture of more efficient processes through productivity gains and reductions in high-frequency, low-impact risks. I will defer a discussion of the cybersecurity opportunity to subsequent articles because of its scope. However, I do want to mention that a focus on reducing human error risks is underappreciated.
The equation is a simple one, but very few organizations ever take the time to calculate the cost of inefficiency, even firms with advanced Six Sigma programs. Here is an oversimplified model: human error (75%) + uncontrollable risks (25%) = operational inefficiency (100%). From here it is easy to see the benefit of human-centered risk management. This is obviously a simplified model, including the statistical assumptions, but not one far from reality if you look at empirical cross-industry analysis.
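The arithmetic behind the model fits in a few lines. In the sketch below, the $20M inefficiency figure and the 50% error-reduction assumption are hypothetical illustrations; only the 75/25 split comes from the oversimplified model above.

```python
# Oversimplified model from the text: human error drives ~75% of
# operational inefficiency; uncontrollable risks account for the rest.
HUMAN_ERROR_SHARE = 0.75

def addressable_savings(annual_inefficiency_cost, error_reduction=0.5):
    """Estimated savings from cutting human-error-driven losses by
    `error_reduction`. All figures are illustrative, not empirical."""
    human_error_cost = annual_inefficiency_cost * HUMAN_ERROR_SHARE
    return human_error_cost * error_reduction

# Hypothetical firm: $20M/yr of operational inefficiency; design work
# that removes half of the human-error causes frees up $7.5M/yr.
print(addressable_savings(20_000_000))  # -> 7500000.0
```

Uncontrollable risks, by definition, are not addressable this way, which is why the design effort concentrates on the human-error share.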
Human-centered risk management focuses on redesigning away the causes of human error, providing real payback in efficiency and business objectives. A risk program designed to facilitate safe and efficient interactions with technology improves risk management and helps grow the business. More on that topic later!
In the next article, I will discuss Intentional Control design and practical use cases for machine learning and artificial intelligence in risk management.
As I have done in previous articles, I invite others to become active participants in helping design a human-centered risk management program and contribute to this effort. If you are a risk professional, auditor, compliance officer, technology vendor or simply an interested party, I hope that you see the benefit of these writings and contribute if you have real-life examples.
James Bone is author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, lecturer on Enterprise Risk Management at Columbia’s School of Professional Studies in New York City and president of Global Compliance Associates, a risk advisory services firm and creator of the Cognitive Risk Management Framework.
Musings of a Cognitive Risk Manager
Before beginning a discussion on human-centered risk, it is important to provide context for why we must consider new ways of thinking about risk. The context is important because the change impacting risk management has happened so rapidly we have hardly noticed. If you are under the age of 25, you take for granted the Internet as we know it today and the ubiquitous utility of the World Wide Web. Twenty-five years ago, dial-up modems were the norm, and desktop computers running "Windows" were rare outside large companies. Fast-forward 25 years: today we don't give a second thought to how the digital economy has changed the way we work, communicate, share information, and conduct business.
What hasn’t changed (or what hasn’t changed much) during this same time is how risk management is practiced and how we think about risks. Is it possible that risks and the processes for measuring risk should remain static? Of course not, so why do we still depend solely on using the past as prologue for potential threats in the future? Why are qualitative self-assessments still a common approach for measuring disparate risks? More importantly, why do we still believe that small samples of data, taken at intervals, provide senior management with insights into enterprise risk?
The constant is human behavior!
Technology has been successful at helping us get more done whenever and wherever we need to conduct business. The change brought on by innovation has nearly eliminated the separation of our work and personal lives; as a result, businesses and individuals are now exposed to new risks that are harder to understand and measure. The semi-state of a hardened enterprise with a soft middle has created a paradox in risk management: the paradox of Robust Yet Fragile. Organizations enjoy robust technological capability to network, partner, and conduct business 24/7, yet we are more vulnerable, or fragile, to massive systemic risks. Why are we more fragile?
The Internet is the prototypical example of a complex system that is “scale-free” with a hub-like core structure that makes it robust to random loss of individual nodes yet fragile to targeted attacks on highly connected nodes or hubs. Likewise, large and small corporations are beginning to look more like diverse forms of complex systems with increased dependency on the Internet as a service model and a distributed network of vendors who provide a variety of services no longer deemed critical or cost effective to perform in house.
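The robust-yet-fragile property is easy to demonstrate in simulation. The sketch below grows a small preferential-attachment network (a common stand-in for scale-free structure) and compares the surviving giant component after losing the same number of nodes at random versus losing the most-connected hubs. The network size, attachment parameter, and seeds are arbitrary choices for illustration.

```python
import random
from collections import defaultdict, deque

def ba_graph(n=500, m=2, seed=42):
    """Preferential attachment: each new node links to m existing nodes
    chosen in proportion to current degree, yielding a hub-dominated network."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    repeated = list(range(m))                 # node ids, repeated by degree
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for t in chosen:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(chosen)
        repeated.extend([new] * m)
    return adj

def giant_component(adj, removed):
    """Size of the largest connected component after deleting `removed` nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

adj = ba_graph()
nodes = list(adj)
k = 50                                         # knock out 10% of nodes
random_loss = set(random.Random(0).sample(nodes, k))
hub_attack = set(sorted(nodes, key=lambda u: len(adj[u]), reverse=True)[:k])

print("after random loss:", giant_component(adj, random_loss))
print("after hub attack: ", giant_component(adj, hub_attack))
# Targeted removal of hubs shrinks the surviving network far more than random loss.
```

The same asymmetry is the point of the analogy: an organization built around a few highly connected vendors, systems, or people shrugs off random failures but can fragment badly when a hub is taken out.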
Collectively, organizations have leveraged complex systems to respond to customer and stakeholder demands and create value, unwittingly becoming more exposed to fragility at critical junctures. Systemic fragility has been tested during recent distributed denial-of-service (DDoS) attacks on critical Internet service providers and recent ransomware attacks, both of which spread with alarming speed. What changed? After each event, risk professionals breathe a sigh of relief and continue pursuing the same strategies that leave organizations vulnerable to massive failure. The Great Recession of 2008-09 is yet another example of the fragility of complex systems and a tepid response to systemic risks. Do we mistakenly take survival as a sign of a cure for the symptoms of systemic illness?
After more than 20 years of explosive productivity growth, the layering of networked systems now poses some of the greatest risks to future growth and security. Inexplicably, productivity has stalled because humans are becoming the bottleneck in infrastructure. Billions of dollars are currently rushing in to finance the next phase of the Internet of Things, which will extend our vulnerabilities to devices in our homes, our cars, and eventually more. Is it really possible to fully understand these risks with 19th-century risk management?
The dawn of the digital economy has resulted in the democratization of content and the disintermediation of past business models in ways unimaginable 20 years ago. I will spare you the boring science behind the limits of human cognition but let’s just say that if you can’t remember what you had for dinner last Wednesday night you are not alone.
But is that enough reason to change your approach to risk management? Not surprisingly, the answer is yes! Acknowledging that risk managers need better tools to measure more complex and emerging risks should no longer be considered a weakness. It also means that expecting employees to follow, without fail or assistance, the growing complexity of policies, procedures, and IT controls required to deal with a myriad of risks may be unrealistic without better tools. 21st-century risk management approaches are needed to respond to the new environment in which we now live.
Over the last 30 years, risk management programs have been built "in response" to risk failures in systems, processes, and human error. Human-centered risk management starts with the human and redesigns internal controls to optimize the objectives of the organization while reducing risks. This may sound like a subtle difference, but it is, in fact, a radically different approach, though not a new one.
Human-factors engineers first met in 1955 in Southern California, but the discipline's contributions to safety across diverse industries are now underappreciated. We don't give a second thought to the technology that protects us when we travel in our cars, trucks, and airplanes or undergo complex medical procedures. These advances in risk management did not happen by accident; they were designed into the products and services we enjoy today!
Each of these industries recognized that human error posed the greatest risk to the objectives of their respective organizations. Instead of blaming humans, however, they sought ways to reduce the complexity that leads to human error and found innovative ways to grow their markets while reducing risks. Imagine designing internal controls that are as intuitive as using a cell phone, allowing employees to focus on the job at hand instead of being distracted by multitasking! A human-centered risk program looks at the human-machine interaction to understand how the work environment contributes to risk.
I will return to this concept in subsequent papers to explain how the human-machine interaction contributes to risk. For now, let’s suffice it to say that there is sufficient research and empirical data to support the argument. To further explain a human-centered risk approach we must also understand how decision-making is impacted as a result of 19th century risk practices.
Situational awareness is a critical component of human-centered risk management. It comprises one's perception of events, comprehension of their meaning, and projection of their status after events have changed or new data is introduced: in short, the ability to predict with clarity how change impacts outcomes and expectations. The opportunity in risk management is to improve situational awareness across the enterprise. Enterprise risks are important, but they are not all equal and should not be treated the same. Situational awareness helps senior executives understand the difference.
The challenge in most organizations is that situational awareness is assumed to be a byproduct of experience and training, and is seldom revisited when the work environment changes to absorb new products, processes, or technology. The failure to understand this vulnerability in risk perception happens at all levels of the organization, from the boardroom down to the front line. The vast majority of changes introduced in organizations tend to be minor in nature but accumulate over time, contributing to a lack of transparency, or inattentional blindness, that impairs situational awareness. This is one of the many reasons organizations are surprised by unanticipated events: we simply cannot see them coming!
Human-centered risk management focuses on designing situational awareness into the work environment from the boardroom down to the shop floor. This multidisciplinary approach requires a new set of tools and cognitive techniques to understand when imperfect information could lead to errors in judgment and decision-making. The principles and processes for designing situational awareness will be discussed in subsequent articles. The goal of human-centered risk management is to design scalable approaches to improve situational awareness across the enterprise.
Human-factors design and situational awareness meet at the "crossroads of technology and the liberal arts," to quote the visionary Steve Jobs. These two factors in human-centered risk management can be achieved by selecting targeted approaches. These approaches will be discussed in more detail in subsequent articles; however, I invite others to participate in this discussion if you too have an interest in reimagining new approaches to risk management.
Traditional risk managers have conducted business the same way for most of the last 30 years, even as technology has advanced beyond their ability to keep pace. Through each financial crisis, risk management has been presented with many opportunities to change but has instead resorted to the same approaches and inevitable outcomes. As competitive pressures grow, boards expect executives to do more with less, pushing risk professionals to adopt creative new ways to add value.
Risks are more complex and systemic in a digital economy with the potential to amplify across disparate vectors critical to business performance. Social media is just one of the many new amplifiers of risks that must be incorporated into enterprise risk programs. Asymmetric risks, like Cyber risk, require a three-dimensional response that includes a deeper understanding of the complexity of the threat and simplicity of execution. The challenge of these more complex risks is even more daunting given the speed of business and distributed nature of data in an interconnected digital economy.
The WannaCrypt cyber attack is just another example of how human behavior has become the key amplifier of risk in a digital economy, and of how situational awareness is part of the solution. There are many stories and opinions about the events and circumstances of the attack, and more details will emerge over time. The truth is that the world got lucky: the astute actions of one person unintentionally stopped the spread of the malware before broad damage could be done. No one should breathe a sigh of relief, because the attackers are now aware of the mistake they made and will, no doubt, correct it and learn new ways to exploit weaknesses more effectively. The real question is: what did we learn?
The answer is it’s not clear, yet! What is clear is that cyber threats will continue to find ways to exploit the human element requiring new approaches to understand the risk and find new solutions. But I digress….
The purpose of these musings is to introduce the emergence of a cognitive era in risk and propose a path for adopting a human-centered strategy for addressing asymmetric complexity in enterprise risk. The themes I will present in a series of articles will build a case for a supplemental approach to risk that incorporates an understanding of vulnerabilities at the human-machine interaction and human-factors design in internal controls, and introduces new technologies to enhance performance in managing and reducing human judgment error for complex risks.
Technology has evolved from a tool designed to free up humans from manual work to the development of information networks creating knowledge workers from the boardrooms of Wall Street to the factory floor. The excess capital created by technology is now being reinvested in next generation tools for more advanced uses.
Innovations in machine learning, artificial intelligence, and other smart technologies promise even greater opportunity for personal convenience and wealth creation. Risk professionals must begin to understand the methods used in these cognitive support tools in order to evaluate which ones work best to address complex risks. The use of smart technology in business applications is growing rapidly; however, the range of capabilities and outcomes varies widely across solutions, so an understanding of the limitations of each vendor's predictive powers is important. Conversely, the rapid advancement of technological innovation has also created a level of complexity that is contributing to the spread of risks in ways that are hard to imagine. It now appears that we are not connecting the dots between the inflection point of technology and human behavior. This is a complex discussion that requires a series of articles to fully unpack.
Risk professionals must begin to understand how human behavior contributes to risk as well as the vulnerabilities at the human-machine interaction. Human error is increasingly cited as the leading cause of risk events in cross-industry data such as IT risk, healthcare, automotive, aeronautics and others. [i][ii][iii][iv][v] Unfortunately, risk strategies incorporating human factors have been widely underrepresented in many risk programs to date. That may be changing! At the core of this change is one constant: humans! Risk professionals who combine "human factors" design with advanced analytical approaches and behavioral risk controls will be better positioned to bring real value to business strategy.
In the world in which we live and breathe, "trust" is developed over repeated interactions between parties with whom a relationship has been built. On the Internet, trust is established much more quickly and subconsciously, based on cognitive cues of similarity or credibility that are not always reliable. This apparent conflict between trust paradigms is the Trust Conundrum. The trust conundrum has become the preferred and most successfully executed attack posture for hackers to exploit, due to the relative ease of creating trust on the Internet. Cognitive hacks, also known as phishing, social engineering, or by other names, are the biggest threat in cybersecurity as the sophistication and variants of these attacks evolve.
Trust in the Internet is not a new or novel topic for those who have followed these trends over many years. In 2003, Penn State University's Lions Center was created to study cyber security, information privacy, and trust. The center was established to serve three main purposes: (a) conduct research to detect and remove threats of information misuse to the human society: mitigate risk, reduce uncertainty, and enhance predictability and trust; (b) produce leading scholars in interdisciplinary cyber-security research; and (c) become a national leader in information assurance education. In the same year, the University of Oxford's Oxford Internet Institute produced a research report titled "Trust in the Internet: The Social Dynamics of an Experience Technology." Today's headlines would suggest that we have much more to learn about trust in the Internet.
After reviewing a variety of studies on trust in the Internet, the general finding is that we maintain a healthy skepticism while conducting business on the Internet because of the perceived risks, yet we trust the Internet to conduct an ever-expanding list of services. The studies suggest that our use of and behavior on the Internet are driven by trust. Generally speaking, the more we use the Internet, the more trust we have, a concept called cybertrust. In other words, our trust ("net confidence") in the Internet grows as our use increases, even as that use exposes us to more threats ("net risks"). This conundrum is partly why cyber attacks continue to grow unabated, and it reveals a large and growing gap not fully addressed by cyber security professionals, technology frameworks and standards, or the policies and procedures designed to mitigate these risks. These studies are dated, and much more research on trust in the Internet is needed, but the initial work provides some insight into the root cause of the problem.
The tension between developing net confidence and the threat of net risks will not be solved in this article. The observation, however, is that consumer behavior on the Internet is beginning to change. A more recent survey, posted on the blog of the National Telecommunications & Information Administration (NTIA) at the U.S. Department of Commerce, noted: “NTIA’s analysis of recent data shows that Americans are increasingly concerned about online security and privacy at a time when data breaches, cybersecurity incidents, and controversies over the privacy of online services have become more prominent. These concerns are prompting some Americans to limit their online activity, according to data collected for NTIA in July 2015 by the U.S. Census Bureau. This survey included several privacy and security questions, which were asked of more than 41,000 households that reported having at least one Internet user.”
These and other research findings suggest that if nothing is done, the growth and huge economic benefits of ecommerce may be curtailed over time as “trust” diminishes in the face of increasing threats in cyberspace. The NTIA’s July 2015 survey found, “Nineteen percent of Internet-using households—representing nearly 19 million households—reported that they had been affected by an online security breach, identity theft, or similar malicious activity during the 12 months prior.”
While most organizations have been primarily concerned with building a defensive posture for the internal security of customer data, it is becoming increasingly clear that developing trust will be a critical factor in the expansion of Internet services and uses by government, business and technology providers. We are therefore approaching a crossroads where innovation, growth and security may depend as much on developing trust in the Internet as on the features and benefits of the products and services it delivers. There are few easy solutions to this problem, as demonstrated by the hacking of the DNC and the growth of breaches more broadly. The lack of progress made since the early research into trust demonstrates that a more comprehensive approach is needed. Joint ventures among academia, industry, government, the military and law enforcement must be forged to address privacy, security and the open Internet. The window of opportunity may be closing.
The COSO ERM framework is being revised with a new tagline: Enterprise Risk Management – Aligning Risk with Strategy and Performance. Dennis Chesley, PwC’s Global Risk Consulting leader and lead partner for the COSO ERM effort, recently stated, “Enterprise risk management has evolved significantly since 2004 and stands at the verge of providing significant value as organizations pursue value in a complex and uncertain environment.” Chesley goes on to state, “This update establishes the relationship between risk and strategy, positions risk in the context of an organization’s performance, and helps organizations anticipate so [they] can get ahead of risk and embrace a mindset of resilience.”
Additionally, the ISO 31000:2009 risk framework is being revised as well. “The revision of ISO 31000:2009, Risk management – Principles and guidelines, has moved one step further to Draft International Standard (DIS) stage where the draft is now available for public comment,” according to the website of the International Organization for Standardization. As Jason Brown, Chair of ISO technical committee ISO/TC 262, Risk management, explains, “The message our group would like to pass on to the reader of the [DIS], Draft International Standard, is to critically assess if the current draft can provide the guidance required while remaining relevant to all organizations in all countries. It is important to keep in mind that we are not drafting an American or European standard, a public or financial services standard, but much rather a generic International Standard.”
And finally, the Basel Committee on Banking Supervision is rolling out, in phases, its final updated reform measures (Basel III) to ensure that bank capital and liquidity requirements provide resilience against systemic risks in financial markets. The magnitude and breadth of these changes may feel overwhelming, depending on where you sit on the spectrum of change affecting your business.
Likewise, more complex and systemic risks, such as cybersecurity, have prompted the National Institute of Standards and Technology to revise and update its Cybersecurity Framework, not to mention changes to Dodd-Frank, healthcare regulation and a host of other regulatory mandates. So where does the value proposition happen in risk management? Given the increasing velocity of change in business and regulatory requirements, how does a risk professional in compliance, audit, risk and/or IT security demonstrate an effective and repeatable value proposition while struggling to keep pace?
To begin, we must acknowledge that, like risk management, the term “value” has very different meanings for different stakeholders. A shareholder’s definition of value will most likely differ from a customer’s. Given this context, we can focus on the “value” proposition derived from a risk professional’s contribution to each stakeholder. However, we need more information to fully understand how a risk professional might approach this topic. An internal auditor may take a risk-based approach to the audits she performs. A regulatory compliance professional derives value by ensuring the effectiveness of internal controls, ethics and awareness. The same is true for the contributions each oversight team makes. In studying other risk professionals, I have begun to learn that I need to expand my definition of value to incorporate disciplines beyond my own skill set.
Sean Lyons, author of “Corporate Defense and the Value Preservation Imperative,” focuses on key strategies to preserve value by expanding the Corporate Defense model from three to five Lines of Defense, creating an enterprise-wide risk approach. Andrea Bonime-Blanc, author of “The Reputation Risk Handbook,” focuses on the importance of understanding the difference between Reputation Management and Reputation Risk. Dr. Bonime-Blanc makes a compelling argument for developing clear steps to manage the key risks that pose the greatest potential damage to a firm’s reputation by adopting an enterprise risk approach to reputation risk. In thinking about where my practice adds value, I have proposed a Cognitive Risk Framework for Cybersecurity and extended the model to enterprise risk management. The framework draws on decades of research in behavioral economics, cognitive and decision science, and a deep look at the human-machine interaction as a way to infuse human elements into risk management, much as automobile manufacturers, NASA and the aerospace industry have redesigned the interiors of their vehicles to account for human behavior and make travel safer.
What is exciting about these and many other new developments in the risk profession is that “value” can be derived from each of these approaches. In fact, while each practice may seem unique, the differences complement one another because risk is not one-dimensional. The risk profiles of many firms have changed and evolved in ways that require more than one view of how to manage the myriad threats facing a firm. The permutations of risk exposure will only expand given the velocity of change in technology and the computing power being acquired by, and expected of, our competitors, customers and adversaries alike.
The challenge for organizations is not to assume that a one-dimensional approach to risk management is sufficient for dealing with three-dimensional risks carrying a great deal of uncertainty.
The value proposition of risk management viewed from this perspective suggests that a cross-disciplinary approach is needed. Even greater value can be created through thoughtful design, value preservation and sustainable practices and behaviors. By this standard, risk management informs and supports the strategic plan through the value it creates for each of its stakeholders. The lesson is that organizations should not get stuck in one dogmatic approach to managing risks while assuming it is sufficient for today’s risk environment. What we learn from others is simply another way value is created for the organization.
In 1981, Carl Landwehr observed that “Without a precise definition of what security means and how a computer can behave, it is meaningless to ask whether a particular computer system is secure.”[i]
Researchers George Cybenko, Annarita Giani, and Paul Thompson of Dartmouth College introduced the term “cognitive hack” in a 2002 article entitled “Cognitive Hacking: A Battle for the Mind.” As they put it, “The manipulation of perception—or cognitive hacking—is outside the domain of classical computer security, which focuses on the technology and network infrastructure.” This is why existing security practice is no longer effective at detecting, preventing or correcting security risks such as cyber attacks.
More than 35 years after Landwehr’s warning, cognitive hacks have become the most common tactic of sophisticated hackers and advanced persistent threats. Cognitive hacks are the least understood of these attacks and operate below conscious awareness, allowing them to occur in plain sight. To understand the simplicity of these attacks, one need look no further than the evening news. The Russian attack on the presidential election is the clearest and most obvious example of how effective they are. In fact, there is ample evidence that these attacks were refined over many years in the elections of emerging countries.
A March 16, 2016 Bloomberg article, “How to Hack an Election,” chronicled how these tactics were used in Nicaragua, Panama, Honduras, El Salvador, Colombia, Mexico, Costa Rica, Guatemala, and Venezuela long before they were used in American elections.
“Cognitive hacking [Cybenko, Giani, Thompson, 2002] can be either covert, which includes the subtle manipulation of perceptions and the blatant use of misleading information, or overt, which includes defacing or spoofing legitimate norms of communication to influence the user.” Reports of an army of autonomous bots creating “fake news,” or at best misleading information, on social media and popular political websites are a classic signature of a cognitive hack.
Cognitive hacks are deceptive and highly effective because of a basic human bias to believe things that confirm our own long-held beliefs or the beliefs of our peer groups, whether social, political or collegial. Our perception is “weaponized” without our knowledge or full understanding that we are being manipulated. Cognitive hacks are most effective in a networked environment, where “fake news” can be picked up on social media as trending news or “viral” campaigns, influencing even more readers without any sign that an attack has been orchestrated. In many cases, the viral spread is itself a manipulation, driven by an army of autonomous bots across social media sites.
At its core, the manipulation of behavior has been practiced for years in marketing, advertising, political campaigns and times of war. In both World Wars, patriotic movies were produced to keep public spirits up and encourage volunteers to enlist and fight. ISIS has been extremely effective at using cognitive hacks to lure an army of volunteers to its jihad, even in the face of the perils of war. We are more susceptible than we believe, which creates our vulnerability to cyber risks and allows them to grow unabated despite huge investments in security. Our lack of awareness of these threats and the subtlety of the approach make cognitive hacks the most troubling problem in security.
I wrote the book “Cognitive Hack: The New Battleground in Cybersecurity … the Human Mind” to raise awareness of these threats. Security professionals must better understand how these attacks work and the new vulnerabilities they create for employees, business partners and organizations alike. More importantly, these threats are growing in sophistication and vary significantly, requiring security professionals to rethink the assurance provided by their existing defensive posture.
The sensitivity of the current investigations into political hacks by the House and Senate Intelligence Committees may prevent full disclosure of the methods and approaches used; however, recent news accounts leave little doubt about their effect, as described by researchers more than 14 years ago and seen more recently in elections in Paris and Central and South America. New security approaches will require a much better understanding of human behavior and collaboration among all stakeholders to minimize the impact of cognitive hacks.
I proposed a simple set of approaches in my book, but security professionals must begin to educate themselves about this new, more pervasive threat and go beyond simple technology solutions to defend their organizations. If you are interested in receiving research or other materials about these risks or approaches to address them, please feel free to reach out.
[i] C.E. Landwehr, “Formal Models of Computer Security,” ACM Computing Surveys, vol. 13, no. 3, 1981, pp. 247–278.
Behavioral economics has only recently begun to garner gradual acceptance among mainstream economists as a rigorous discipline offering an alternative perspective on decision-making. The broad acceptance and growing adoption of behavioral economic theories, along with advances in computational firepower, present opportunities to develop practical applications for improving risk management practice. The goal of this article is to develop a contextual model of a cognitive risk framework for enterprise risk management, one that frames both the limitations and the possibilities of enhancing enterprise risk management by combining behavioral science with a more rigorous analytical approach. The thesis of this paper is that managers and staff are subject to natural limitations in Bayesian probability judgments as well as errors in judgment, due in part to insufficient experience or data to draw reliably consistent conclusions with great confidence. In this context, a cognitive risk framework helps to recognize these limitations in judgment. The Cognitive Risk Framework for Cybersecurity and its Five Pillars are offered as guides for developing an advanced enterprise risk framework to deal with complex and asymmetric risks such as cyber risks.
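The limits of intuitive Bayesian judgment can be made concrete with a short worked example. The numbers below are assumptions chosen to mirror the classic base-rate problem studied by Tversky and Kahneman, not figures from this article: a monitoring scenario where real incidents are rare and the detector is fairly accurate.

```python
def bayes_posterior(prior, hit_rate, false_alarm_rate):
    """P(event | signal), computed with Bayes' theorem."""
    p_signal = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / p_signal

# Hypothetical numbers: 1% of monitored events are real incidents; the
# detector flags 90% of real incidents and mis-flags 9% of benign events.
posterior = bayes_posterior(prior=0.01, hit_rate=0.90, false_alarm_rate=0.09)
print(round(posterior, 3))  # 0.092
```

Most people intuitively estimate a probability near 90% that a flagged event is real; the correct posterior is about 9%, because benign events vastly outnumber real ones. This is precisely the kind of systematic misjudgment a cognitive risk framework is meant to surface.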
“A major task in organizing is to determine, first, where the knowledge is located that can provide the various kinds of factual premises that decisions require.” – Herbert Simon
In a 1998 critique of Amos Tversky’s contributions to behavioral economics, Laibson and Zeckhauser discussed how Tversky systematically exposed the theoretical flaws in the assumption that individual actors rationally pursue perfect optimality. Tversky and Kahneman’s “Judgment under Uncertainty: Heuristics and Biases” (1974) and Prospect Theory (1979) demonstrated that actual decisions involve systematic error. “The rational choice advocates assume that to predict these errors is difficult or, in the more orthodox conception of rationality, impossible. Tversky’s work rejects this view of decision-making. Tversky and his collaborators show that economic rationality is systematically violated, and that decision-making errors are both widespread and predictable. This now incontestable point was established by two central bodies of work: Tversky and Kahneman’s papers on heuristics and biases, and their papers on framing and prospect theory.”
Many of Tversky and Kahneman’s contributions are less well known to the general public and are misinterpreted by some risk professionals as purely theoretical. As researchers, Tversky and Kahneman were well versed in mathematics, which helped them shine a light on systematic errors in complex probability judgments and on the use of heuristics in inappropriate contexts. As groundbreaking as behavioral science has been in challenging economic theory, Tversky and Kahneman’s work centers on a narrow set of heuristics: representativeness, availability and anchoring. The authors used these three foundational heuristics broadly to describe how decision-makers substitute mental shortcuts for probabilistic judgments, resulting in biased inferences and a lack of rigor in decisions made under uncertainty.
Cognitive Risk Framework: Harnessing Advanced Technology for Decision Support
In the thirty years since Prospect Theory, data analytics expertise and computational firepower have made significant progress in addressing the weaknesses in Bayesian probability judgments recognized by Tversky and Kahneman. Additionally, the automotive industry and Apple Inc., among others, have successfully incorporated behavioral science into product design to reduce risk, anticipate human error and improve the user experience, adding value reflected in financial results. This paper assumes that these early examples of progress point to untapped potential if applied constructively. There are detractors, and even Tversky and Kahneman admitted to inherent weaknesses that are not easy to solve. For example, observers are skeptical that laboratory results replicate real-life situations, note that arbitrary frames don’t reflect reality, and point to a lack of mathematical predictive accuracy.
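For readers who know Prospect Theory only by name, its asymmetric value function can be sketched in a few lines. The parameter values below (α = β = 0.88, λ = 2.25) are the median estimates reported in Tversky and Kahneman’s 1992 cumulative prospect theory paper, used here purely as an illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    convex and steeper (loss-averse) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

gain, loss = prospect_value(100), prospect_value(-100)
# A $100 loss "hurts" more than twice as much as a $100 gain "helps".
print(round(gain, 1), round(loss, 1))
```

The asymmetry is the point: decision-makers over-weight losses relative to equivalent gains, which helps explain why risk reporting framed as potential loss and the same facts framed as forgone gain can produce different management decisions.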
Since Laibson and Zeckhauser’s (1998) critique of Tversky’s contributions to economics, a large body of research in cognition has evolved to include Big Data, Computational Neurosciences, Cognitive Informatics, Cognitive Security, Intelligent Informatics, and rapid early-stage advances in machine learning and artificial intelligence. A Cognitive Risk Framework is proposed to leverage the rapid advancement of these technologies in risk management; however, technology alone is not a panacea. Many of these technologies are still evolving, and progress will continue in stages, requiring risk professionals to begin formalizing steps to incorporate these tools into an enterprise risk management program in combination with other human elements.
The Cognitive Risk Framework anticipates that, as promising as these new technologies are, they represent only one pillar of a robust and comprehensive framework for managing increasingly complex threats such as cyber and enterprise risks. The Five Pillars are Intentional Controls Design, Intelligence and Active Defense, Cognitive Risk Governance, Cognitive Security Informatics, and Legal “Best Efforts” Considerations. A cognitive risk framework does not supplant other risk frameworks, such as COSO ERM, ISO 31000 or the NIST standards, for managing a range of enterprise risks. Rather, it is presented to leverage the progress made in risk management and to provide a pathway to demonstrably enhance enterprise risk management using advanced analytics to inform decision-making in ways only now possible. At the core of the framework is an assumption about data.
One of the core tenets of Prospect Theory is the recognition of decision-making errors derived from small sample sizes or poor-quality data. Tversky and Kahneman noted several cases in which even very skilled researchers routinely made errors of inference based on poor sampling techniques. Many recognize the importance of data; however, organizations must anticipate that a cross-disciplinary team of experts is needed to actualize a cognitive risk framework. Data will become either the engine of a cognitive risk framework or its Achilles’ heel, and it may be the most underestimated investment in ramping up a cognition-driven risk program. A cognitive risk framework anticipates a much more diverse skill set than currently exists in risk management and IT security.
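The small-sample errors Tversky and Kahneman described (their “law of small numbers”) can be demonstrated directly: estimates of an incident rate drawn from small samples scatter far more widely than the same estimates from large samples. The simulation below is a minimal sketch with assumed numbers (a true 5% incident rate, samples of 20 versus 2,000 events):

```python
import random
import statistics

def rate_estimates(true_rate, sample_size, trials, rng):
    """Estimate an incident rate repeatedly from independent samples."""
    return [
        sum(rng.random() < true_rate for _ in range(sample_size)) / sample_size
        for _ in range(trials)
    ]

rng = random.Random(42)
small = rate_estimates(0.05, sample_size=20, trials=500, rng=rng)
large = rate_estimates(0.05, sample_size=2000, trials=500, rng=rng)

# The small-sample estimates scatter roughly ten times more widely around
# the true 5% rate than the large-sample estimates do.
print(round(statistics.stdev(small), 3), round(statistics.stdev(large), 3))
```

A risk team that generalizes from 20 observed control failures is, in effect, trusting the left-hand column of this output; the point of a data investment is to move decisions toward the right-hand column.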
Data is but one consideration in developing a robust cognitive risk framework. Others include developing structures and processes that allow ease of adoption by practitioners across multiple industries and in organizations of different sizes. While it is anticipated that a cognitive risk framework can be implemented successfully in both large and small organizations, risk professionals may decide to adopt a modified version of the Five Pillars or develop solutions that address specific risks, such as cybersecurity, as standalone programs. It is also anticipated that, if cognitive risk frameworks are adopted more broadly, technology firms and standards organizations will take an active role in developing complementary programs that leverage these frameworks to advance enterprise risk management using advanced analytics and cognitive elements.