In this webinar we will look at cognitive security – the concept of using data mining, machine learning, natural language processing and human-computer interaction to mimic the way the human brain functions and learns – in order to help fight cybercrime.
If you spend any time on social media, viewing online news stories, or reading blog posts from pundits and self-described experts and consultants [present company included], you will notice that the ratio of "jargon" to information is rising rapidly. This is especially true in enterprise risk management, machine learning, artificial intelligence, data analysis, and other fields where opinions are diverse because real expertise is in short supply.
This is a real problem on many fronts because jargon obscures the transfer of actionable information and makes it harder to make decisions that really matter. So I looked up the definition of “jargon”.
“Jargon: special words or expressions that are used by a particular profession or group and are difficult for others to understand.”
Well-intentioned people use jargon to convey a sense of expertise in a particular subject to those of us seeking to learn more and make sense of the information we are reading. The problem is that neither the speaker nor the listener is really exchanging meaningful information. In an era where vast amounts of misinformation are a mouse click away, we must begin to speak clearly.
Critical thinking is the product of objective analysis and evaluation of an issue to make an informed decision. However, because we are human, what we believe can be based on biased information from peer groups, background, experience, political leanings, family history, and other factors both conscious and subconscious.
In an era where "truth" is malleable, critical thinkers are more important than ever. This is especially relevant to risk professionals: the jargon in risk management is destroying the practice and profession of risk management.
Yes, these are strong words, but we must be honest about what is not working. We, the collective "we", use words like Risk Appetite, Risk Register, Risk Value, Risk Insights, or my favorite, "the ability to look around corners," as if everyone understands what they mean and how to use these words to define some process that leads to awareness. The practice of risk management does not endow the practitioner with the ability to see the future. Done well, risk management is the process of reducing uncertainty, BUT only in certain situations!
Let's stop expecting superhuman feats of wisdom in risk management that no one has ever demonstrated consistently over time.
We call a risk framework a risk program when it is only an aspirational guide for what goes into a risk program, not what you do to understand and address risks. The truth is that there is so much jargon in risk management because we know very little about how to do it well. Fortunately, the truth is much simpler, and far less omniscient, than the hype from uninformed pundits who would have you believe otherwise. This may be disappointing to hear, and many may argue against this narrative, but let's examine the truth.
Think of risk management as an oak tree with one trunk but many branches. Economics is the trunk of the oak tree of risk management, with many branches of decision science that include advanced analytics and human behavior, among many others.
Economists and a psychologist are the only people who have ever won a Nobel Prize for work in the science of risk management.
Risk management was NOT invented by COSO ERM, consultants like McKinsey & Co., or applied mathematicians; however, many disciplines have played an active role in advancing the practice of risk management, which is still in its infancy. Risk management is challenging because, unlike the laws of physics, which can be understood and modeled according to scientific methods, the laws of human nature consistently defy logic. One look at today's headlines is all you need to understand the complexity of risk management in any organization.
As the oak tree of risk management grows, new branches are needed, such as data science, data management, cognitive system design, ergonomics, intelligent technology, and many other disciplines. I created the Cognitive Risk Framework for Enterprise Risk Management and Cybersecurity to make room for the inevitable growth and diversity of disciplines that will evolve through the practice of risk management. It too is an aspiration of what a risk program can become. Risks are not some static "thing" that can be tamed into obedience by one approach, a narrow focus on internal controls, or the next hot trend in technology. Risk management must continue to evolve, and so must those of us who are passionate about learning to manage risks better.
Let me leave you with one new piece of jargon that is growing rapidly: "signal." The word is being used in Big Data conversations to describe separating the noise of Big Data from real insights in order to understand what customers want, identify trends in data, and understand risks. How is that for a multi-jargon sentence?
Not surprisingly, McKinsey has jumped on this bandwagon to tell listeners that they too must separate the signal from the noise. Like all jargon, few tell you how, only that you must do these things. What only a few will tell you is that the challenge of identifying the signal, insight, value, or whatever jargon you like, requires a multidisciplinary approach.
The Cognitive Risk Framework for Enterprise Risk and Cybersecurity was developed to start a conversation about the "how" of risk management's evolution into what it will become, not some imaginary end state of risk management.
Cognitive Hack addresses an area of cybersecurity that has not been widely explored—the human element. Most cybersecurity authors focus on how technology can be used and/or adapted to make an enterprise's infrastructure secure. Bone, a risk advisory consultant and an editor, aims "to introduce readers to the evolution of emerging technologies …" and to "address what some believe to be the weakest link in cybersecurity—the human mind."
The author examines six distinct areas: understanding various vulnerabilities, exploring advances in situational awareness, "the cyber paradox," the risk of relying solely on industry reports, delving into a hacker's mind, and providing a "cognitive risk framework" for cybersecurity. In each of these topics, Bone uses real-world examples of security breaches and how the human element affected the severity of the breach. He also supplies ways the human element of each breach could have been mitigated, thus lessening the severity. In addition, Bone explains that cognitive hacking is in its infancy, and much work and research still needs to be completed. For those interested in the topic, he lists several areas where further research is needed.
–T. Farmer, Arkansas State University
The preceding review appeared in the October 2017 issue of CHOICE.
"Intelligent automation" is such a new term that you won't find it in Wikipedia or Merriam-Webster. Yet we are clearly in the early stages of a technological transformation that's no less dramatic than the one spurred by the emergence of the Internet.
A new age in quantitative and empirical methods will change how businesses operate as well as the role of traditional finance professionals. To compete in this environment, finance teams must be willing to adopt new operating models that reduce costs and improve performance through better data. In short, a new framework is needed for designing an “intelligent organization.”
The convergence of technology and cognitive science provides finance professionals with powerful new tools to tackle complex problems with more certainty. Advanced analytics and automation will increasingly play bigger roles as tactical solutions to drive efficiency or to help executives solve complex problems.
But the real opportunities lie in reimagining the enterprise as an intelligent organization: one designed to create situational awareness with tools capable of analyzing disparate data in real or near-real time.
Automation of redundant processes is only the first step. An intelligent organization strategically designs automation to connect disparate systems (e.g., data sources) by enabling users with tools to quickly respond or adjust to threats and opportunities in the business.
Situational awareness is the product of this design. In order to push decision-making deeper into the organization, line staff need the tools and information to respond to change in the business and the flexibility to adjust and mitigate problems within prescribed limits. Likewise, senior executives need near-real-time data that provides the means to query performance across different lines of business with confidence and to anticipate the impact of singular or enterprise-wide events in order to avoid costly mistakes.
Financial reporting is becoming increasingly complex at the same time finance professionals are being challenged to manage emerging risks, reduce costs, and add value to strategic objectives. These competing mandates require new support tools that deliver intelligence and inspire greater confidence in the numbers.
Thankfully, a range of new automation tools is now available to help finance professionals achieve better outcomes against this dual mandate. However, to be successful finance executives need a new cognitive framework that anticipates the needs of staff and provides access to the right data in a resilient manner.
This cognitive framework provides finance with a design road map that includes human elements focused on how staff uses technology and simplifying the rollout and implementation of advanced analytical tools.
The framework is composed of five pillars, each designed to complement the others in the implementation of intelligent automation and the development of an intelligent organization:
- Cognitive governance
- Intentional control design
- Business intelligence
- Performance management
- Situational awareness
Cognitive governance is the driver of intelligent automation as a strategic tool in guiding organizational outcomes. The goal of cognitive governance, as the name implies, is to facilitate the design of intelligent automation to create actionable business intelligence, improve decision-making, and reduce manual processes that lead to poor or uncertain outcomes.
In other words, cognitive governance systematically identifies “blind spots” across the firm then directs intelligent automation to reduce or eliminate the blind spots.
The end game is to create situational awareness at multiple levels of the organization with better tools to understand risks, errors in judgment, and inefficient processes. Human error as a result of decision-making under uncertainty is increasingly recognized as the greatest risk to organizational success. Therefore, it is crucial for senior management to create a systemic framework for reducing blind spots in a timely manner. Cognitive governance sets the tone and direction for the other four pillars.
Intentional control design, business intelligence, and performance management are tools for creating situational awareness in response to cognitive governance mandates. A cognitive framework does not require huge investments in the latest big data “shiny objects.” It’s not necessary to spend millions on machine learning or other forms of artificial intelligence. Alternative automation tools for simplifying operations are readily available today, as is access to advanced analytics, for organizations large and small, from a variety of cloud services.
However, for firms that want to use machine learning/AI, a cognitive framework easily integrates any widely used tool or regulatory risk framework. A cognitive framework is focused on a factor that others ignore: how humans interact with and use technology to get their work done most effectively.
Network complexity has been identified as a strategic bottleneck in response times for dealing with cybersecurity risks, cost of technology, and inflexibility in fast-paced business environments. Without a proper framework, improperly designed automation processes may simply add to infrastructure complexity.
There is also a dark side to machine learning/AI that organizations must understand in order to anticipate best use cases and avoid the inevitable missteps that will come with autonomous systems. Microsoft learned a hard lesson with "Tay," its chatbot project, which was shelved when users taught the bot racist remarks. While there are many uses for AI, this technology is still in an experimental stage of growth.
Overly complicated approaches to intelligent automation are the leading cause of failed big data projects. Simplicity is the new value proposition that should be expected from the implementation of technology solutions. Intelligent automation is one tool to accomplish that goal, but execution requires a framework that understands how people use new technology effectively.
Simplicity must be a strategic design imperative based on a framework for creating situational awareness across the enterprise.
James Bone is a cognitive risk consultant; a lecturer at Columbia University’s School of Professional Studies; founder of TheGRCBlueBook.com, an online directory of governance, risk, and compliance tools; and author of, “Cognitive Hack: The New Battleground in Cybersecurity … the Human Mind.”
This post originally appeared in CFO magazine.
This marcus evans event will enable banks to establish an effective IRRBB framework to gain competitive ground. The meeting will explore how banks can steer the balance sheet to defend against interest rate risk sensitivities, advancements to systems and behavioural models under the IRRBB, developments in the IRRBB EBA guidelines and stress testing requirements and the separation between the banking and trading book.
In my previous articles, I introduced Human-Centered risk management and the role that Cognitive Risk Governance should play in designing the risk and control environment outcomes that you want to achieve. One of the key outcomes was briefly described as situational awareness that includes the tools and ability to recognize and address risks in real time. In this article, I will delve deeper into how to redesign the organization using cognitive tools while reimagining how risks will be managed in the future. Before I explore “the how” let’s take a look at what is happening right now.
This concept is not some futuristic state! On the contrary, this is happening in real time. BNY Mellon, one of the oldest firms on Wall Street, has started a transformation to a cognitive risk governance environment. Mellon is not the only Wall Street titan leading this charge. JP Morgan, BlackRock, and Goldman Sachs, among others, are hiring Silicon Valley talent to transform banking, in part to remain competitive and to strategically reduce costs, innovate, and build scale not possible with human resources alone. The banks have taken a very targeted approach to specific areas of opportunity within the firm and are seeking new ways to bring innovation to customer service and new product development, and to create efficiencies that will have profound implications for risk, audit, compliance, and IT now and in the foreseeable future.
As these early-stage projects expand, the transformation taking place today will position these firms with competitive advantages few can anticipate. I do not know the business plans of BNY Mellon, JP Morgan, BlackRock, or Goldman Sachs, but it is safe to say that each of these firms will see the benefits of implementing targeted solutions with smart systems to augment decision-making and drive growth. They may also reduce risks in the process. However, as these firms grow their smart technology portfolios, it will become obvious that a strategic plan must include an overarching Cognitive Risk Governance program that goes deeper than IT efficiencies, investment management, and one-off cost savings in contract reviews. I applaud the approach these firms are taking; these are low-hanging "tactical fruit," but one must start somewhere!
The real question is what role will risk management, audit, and compliance play in this new cognitive risk era? Will oversight functions continue to be observers of change or leaders in change with a risk framework that contemplates an enterprise approach to smart systems? Will oversight functions seek opportunity in this new cognitive risk era or choose to ignore the growth of these advances?
The Cognitive Risk Framework for Enterprise Risk Management has been presented in earlier articles as a set of pillars that include human elements integrated with technology, because technology alone is not enough! Smart systems will reduce costs and, in some cases, redundant staff; in other cases they will reduce the need to add people to build scale. However, without a more comprehensive approach, the limits of a technology-only strategy will become obvious as soon as the cost savings decline.
If firms truly want to create a multiplier effect of cost savings and scale the transformation must include technology that assists humans to become more productive!
If operational and residual risks represent the bulk of inefficient bottlenecks, or have limited a firm's ability to respond quickly to changes in the business environment, a well-designed cognitive risk framework offers firms the ability to free up the back- and middle-office environment. How so?
Introduction to Intentional Control Design, Machine Learning & Situational Awareness
First, automation trumps big data analytics!
I know that Big Data, predictive analytics, machine learning, and artificial intelligence sound sexy, seem cool, and are the future! But let's work in the real world for a moment. Google has made great advances in machine learning, but if you actually take the time to read its research literature (as perhaps 1% or fewer of the pundits do), you will find that the actual use cases have been limited. The real opportunities involve routine processes with very large pools of well-defined data.
You can’t teach a machine to be smart with dumb data
If you have unlimited resources, or simply want to throw away money, then start a Big Data project with unstructured, random data! Some may argue the benefits of this approach, but consider this: most firms produce petabytes of structured data every single day in production environments, data that is rarely leveraged to its full capacity. Why not start with a good data source and automate the processes that produce this data, helping humans get their jobs done more efficiently? Want to ensure internal controls work flawlessly? Automate them! Want to ensure compliance with regulatory mandates? Automate it! Want to produce real-time audit sampling and monitoring? Automate it!
Design the risk, compliance, IT, and audit outcomes that you need! Intentional Control Design takes advantage of machine learning in the most efficient manner through the corpus of data that already exists in production.
Once you do that you have your big data projects solved! Need audit data to test compliance? Done! Need risk assessments with real data? Done! Need to check fraudulent activity? Done!
If you want to create situational awareness for how your firm is operating in real time, design it! Automation trumps Big Data analytics, but most get this backwards!
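To make "automate it" concrete, here is a minimal Python sketch of an automated control check plus a continuous audit sample running over structured, production-style records. The transaction schema, the $25,000 approval limit, and the 5% sampling rate are all invented for illustration; they are not drawn from any framework or regulation.

```python
import random
from datetime import date, timedelta

# Hypothetical transaction records, standing in for the structured
# production data most firms already generate every day.
random.seed(7)
transactions = [
    {
        "id": i,
        "amount": round(random.uniform(50, 60_000), 2),
        "approver": random.choice(["alice", "bob", None]),
        "posted": date(2024, 1, 1) + timedelta(days=random.randrange(90)),
    }
    for i in range(1_000)
]

APPROVAL_LIMIT = 25_000  # illustrative control threshold

def run_control_checks(txns):
    """Flag records that violate two simple automated controls."""
    exceptions = []
    for t in txns:
        if t["amount"] > APPROVAL_LIMIT and t["approver"] is None:
            exceptions.append((t["id"], "over-limit with no approver"))
        if t["amount"] <= 0:
            exceptions.append((t["id"], "non-positive amount"))
    return exceptions

def audit_sample(txns, rate=0.05):
    """Draw a random monitoring sample for audit review."""
    k = max(1, int(len(txns) * rate))
    return random.sample(txns, k)

flags = run_control_checks(transactions)
sample = audit_sample(transactions)
print(f"{len(flags)} control exceptions, {len(sample)} records sampled")
```

Once checks like these run against every record as it is produced, "audit data to test compliance" and "risk assessments with real data" are byproducts of the pipeline rather than separate projects.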
Unstructured data requires human annotation, which increases costs exponentially, so why start there? It may not be sexy, but the money you save will make you feel better than the money you lose chasing glamour projects that add little value.
Automation gives you situational awareness through true transparency! Transparency gives the board and senior management the ability to adjust in a more timely manner. If you want a no-surprises business environment, consider designing one. It doesn't happen by accident, nor does it happen by threatening staff not to make mistakes!
Cars are safer today than 40 years ago because of design! Airline travel is safer today because of design. Amazon, Facebook, Google, and Apple have overtaken traditional business models by design!
There are a number of residual benefits that I haven’t discussed in detail yet like reduction in cyber risks, employee burnout, increased staff productivity and many more. I saved these for last because we always forget that humans are the real engines of business growth.
If you are still an unbeliever, just take a look at the store closings in the retail industry caused by not listening to the change created by the Internet and firms like Amazon. I understand that change is hard, but without change it will be harder to keep up and survive in an environment that moves in nanoseconds!
Musings of a Cognitive Risk Manager
In my last article, I explained the difference between traditional risk management and human-centered risk management and began building the case for why we must reimagine risk management for the 21st century. I purposely did not get into the details right away because it is really important to understand WHY some “Thing” must change before change can really happen. In fact, change is almost impossible without understanding why.
Why put on sunscreen if you didn't know that skin cancer is caused by too much exposure to ultraviolet rays from the sun? We know that drinking and driving is one of the deadliest causes of highway fatalities, BUT we still do it! Knowing the risk of some "Thing" doesn't prevent us from taking the chance anyway. This is why diets are so hard to maintain and habits so hard to change. We humans do irrational things for reasons that we don't fully understand. That is precisely WHY we need Cognitive Risk Governance.
Cognitive risk governance is the "Designer" of human-centered risk management! The sunscreen is effective (if you use it properly!) because its formulation was designed to protect our skin from ultraviolet rays. Diets are designed to help us lose weight. Therefore, cognitive risk governance must also design the outcomes that we seek!
This is radically different from any other risk framework. If you take the time to study any framework, 99% of the guidance is focused on the details of the activity you must do first: do risk assessments, develop internal controls, create policies and procedures, blah blah blah. The details are important, but what if your focus is on the wrong stuff, which too often is the case? If you have ever heard the phrase "shoot first, then aim," then you now fully understand why most risk frameworks don't work.
The fallacy of action is the root cause of failure in risk management programs.
It is really important to understand this concept, so let me provide an illustration. If you want to create a fuel-efficient car, you must first design the car to get more mileage from the same amount of fuel.

To achieve better efficiency, you must understand why cars are not fuel efficient; and to fully understand that, manufacturers must reimagine the car.

However, before you start changing the car, you must decide how efficient you want the car to become.
Design starts with imagining the end state, then determining what steps to take to achieve the goal. This is how cognitive risk governance works in human-centered risk management.
The role of cognitive risk governance is to design new ways to reduce risks across the organization. In order to reduce risks we must understand why certain risks exist and determine the right reduction in risk we want to achieve. This is why cognitive risk governance is a radical departure from traditional risk management.
In contrast, traditional risk management advocates a Top Ten list of risks or a Risk Repository that inventories events. Unfortunately, the goal seems to be monitoring risks as opposed to reducing them. Risks cannot be completely eliminated; therefore, any "activity-focused" risk program will always find new risks to add to the list. A human-centered risk management program is focused on reducing risks to acceptable levels through design. But not all risks! The focus is on complex risks!
Cognitive risk governance is the process of designing human-centered risk management to address the most complex risks. Any distribution of risk data will tell you that 75-80% of risks are high-frequency, low-impact risks, yet traditional risk programs spend 90% of their energy dealing with the least important risks. The opportunity presented by a cogrisk governance model is to separate risks into appropriate levels of importance. Risks represent a range (distribution) of outcomes; therefore, one-dimensional approaches will inevitably fail to address the full range of complex risks.
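The separation described above can be sketched with a trivial classifier. The event counts, dollar impacts, and cutoffs below are entirely hypothetical; the point is only to show how bucketing by frequency and impact surfaces the small number of complex, high-impact risks that deserve most of the program's energy.

```python
# Made-up risk events: (name, events per year, loss per event in $).
events = [
    ("data entry errors",   900,     250),
    ("invoice disputes",    400,     800),
    ("system outage",         4, 120_000),
    ("regulatory breach",     1, 750_000),
    ("vendor failure",        2, 300_000),
]

FREQ_CUTOFF = 50        # events/year separating high from low frequency (assumed)
IMPACT_CUTOFF = 50_000  # $/event separating high from low impact (assumed)

def classify(freq, impact):
    """Bucket a risk into a simple frequency/impact quadrant."""
    f = "high-frequency" if freq >= FREQ_CUTOFF else "low-frequency"
    i = "high-impact" if impact >= IMPACT_CUTOFF else "low-impact"
    return f"{f}/{i}"

for name, freq, impact in events:
    print(f"{name:20s} {classify(freq, impact):28s} "
          f"expected annual loss ${freq * impact:,}")
```

Even in this toy data, most events by count are high-frequency/low-impact, while the bulk of expected loss sits in a handful of low-frequency/high-impact risks, which is the asymmetry the text argues traditional programs get backwards.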
Developing a Cognitive Risk Governance Tool Kit
The toolkit for designing cognitive risk governance involves an understanding of a few concepts that any organization can implement.
Cognitive risk governance starts with a clear understanding of the difference between "uncertainty" and "risk." Uncertainty is simply what you do not know, or what you lack clear insight into the impact of should it occur. Risks are known, but that doesn't mean you fully understand their nature. I do not subscribe to the semantic exercise of known-knowns, known-unknowns, and unknown-unknowns. There is no rigor in this exercise, nor does it provide new insights into solving problems of importance.
The next concept in a cogrisk governance program involves developing risk intelligence and active defense. Risk intelligence is quantitative and qualitative data from which analysts can develop insights into complex risks. The processes of data management, data analysis, and the formulation of risk intelligence may require a multidisciplinary team of experts, depending on the complexity of the organization and its risk profile.
Active defense, on the other hand, is the process of implementing targeted solutions driven by risk intelligence to capture new opportunities and reduce risk exposures that impede growth. Risk Intelligence and active defense will require solutions and new tools that may not be in use in traditional risk programs. Organizations are generating petabytes of data that are seldom leveraged strategically to manage risk. A cogrisk governance program is responsible for designing risk intelligence and active defense in ways that leverage these stores of data as well as external sources of intelligence.
In traditional risk management, the "Three Lines of Defense" model is a common approach to defending the organization. Yet to understand why change is needed, one need only look at how the military is re-engineering its workforce to a 21st-century model to address the new battleground being fought with technology and cognition. It is no longer reasonable to expect an army of people with limited tools to analyze the movement of petabytes of data into, across, and out of an organization with confidence.
The transformation in the military is being led by the Joint Chiefs of Staff, a corollary for risk, compliance, audit, and IT professionals. Risk professionals must lead the change from 19th-century risk practice to 21st-century human-centered risk management. Existing risk frameworks such as COSO, ISO, and Basel have laid a good foundation from which to build, but more needs to be done.
I will address these opportunities in more detail in subsequent articles, but for now let's move to the next concept in a cogrisk governance model. The intersection of human-machine interactions has been identified as a critical vulnerability in cybersecurity. Moreover, poorly designed workstations that require employees to cobble together disparate data and systems to complete work tasks represent inefficiencies that create unanticipated risks in the form of human error.
The intersection of the human-machine interaction represents two significant opportunities in a human-centered risk management program. The first opportunity is an improvement in cybersecurity vulnerability, and the second is the capture of more efficient processes through productivity gains and reductions in high-frequency, low-impact risks. I will defer a discussion of the cybersecurity opportunity to subsequent articles because of its scope. However, I do want to mention that the focus on reducing human error risks is underappreciated.
The equation is a simple one, but very few organizations ever take the time to calculate the cost of inefficiency, even firms with advanced Six Sigma programs. Here is an oversimplified model: human error (75%) + uncontrollable risks (25%) = operational inefficiency (100%). From here it is easy to see the benefit of human-centered risk management. This is obviously a simplified model, including the statistical data, but not one far from reality if you look at empirical cross-industry analysis.
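A back-of-the-envelope version of this model shows how quickly the arithmetic turns into a business case. The total loss figure and the redesign-effectiveness rate below are invented for illustration; only the 75/25 split comes from the model above.

```python
# Oversimplified model from the text: human error (~75%) +
# uncontrollable risks (~25%) = operational inefficiency (100%).
annual_loss_from_inefficiency = 10_000_000  # assumed total, $/year (hypothetical)

HUMAN_ERROR_SHARE = 0.75      # share attributed to human error (per the model)
UNCONTROLLABLE_SHARE = 0.25   # share attributed to uncontrollable risks

human_error_cost = annual_loss_from_inefficiency * HUMAN_ERROR_SHARE
uncontrollable_cost = annual_loss_from_inefficiency * UNCONTROLLABLE_SHARE

# If redesigning error-prone processes eliminated, say, 40% of
# human-error losses (an assumed effectiveness rate):
redesign_effectiveness = 0.40
savings = human_error_cost * redesign_effectiveness

print(f"Human error cost:    ${human_error_cost:,.0f}")
print(f"Uncontrollable cost: ${uncontrollable_cost:,.0f}")
print(f"Potential savings:   ${savings:,.0f}")
```

Because the human-error share dominates the total, even a modest redesign effectiveness rate translates into most of the achievable savings, which is why the model favors attacking human error first.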
Human-centered risk management focuses on redesigning the causes of human error, providing real payback in efficiency and business objectives. A risk program designed to facilitate safe and efficient interactions with technology improves risk management and helps grow the business. More on that topic later!
In the next article, I will discuss Intentional Control design and practical use cases for machine learning and artificial intelligence in risk management.
As I have done in previous articles, I invite others to become active participants in helping design a human-centered risk management program and contribute to this effort. If you are a risk professional, auditor, compliance officer, technology vendor or simply an interested party, I hope that you see the benefit of these writings and contribute if you have real-life examples.
James Bone is author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, lecturer on Enterprise Risk Management at Columbia’s School of Professional Studies in New York City and president of Global Compliance Associates, a risk advisory services firm and creator of the Cognitive Risk Management Framework.
Musings of a Cognitive Risk Manager
Before beginning a discussion of human-centered risk, it is important to provide context for why we must consider new ways of thinking about risk. The context is important because the change impacting risk management has happened so rapidly we have hardly noticed. If you are under the age of 25, you take for granted the Internet as we know it today and the ubiquitous utility of the World Wide Web. Twenty-five years ago, dial-up modems were the norm, and desktop computers with "Windows" were rare except in large companies. Fast-forward to today: we don't give a second thought to the changes a digital economy has made manifest in how we work, communicate, share information, and conduct business.
What hasn’t changed (or what hasn’t changed much) during this same time is how risk management is practiced and how we think about risks. Is it possible that risks and the processes for measuring risk should remain static? Of course not, so why do we still depend solely on using the past as prologue for potential threats in the future? Why are qualitative self-assessments still a common approach for measuring disparate risks? More importantly, why do we still believe that small samples of data, taken at intervals, provide senior management with insights into enterprise risk?
The constant is human behavior!
Technology has been successful at helping us get more done whenever and wherever we need to conduct business. The change brought on by innovation has nearly eliminated the separation between our work and personal lives; as a result, businesses and individuals are now exposed to new risks that are harder to understand and measure. The hardened enterprise perimeter with a soft middle has created a paradox in risk management: Robust Yet Fragile. Organizations enjoy robust technological capability to network, partner and conduct business 24/7, yet we are more vulnerable, or fragile, to massive systemic risks. Why are we more fragile?
The Internet is the prototypical example of a complex system that is “scale-free,” with a hub-like core structure that makes it robust to the random loss of individual nodes yet fragile to targeted attacks on highly connected nodes, or hubs. Likewise, large and small corporations are beginning to look like diverse forms of complex systems, with increased dependency on the Internet as a service model and on a distributed network of vendors who provide a variety of services no longer deemed critical or cost-effective to perform in house.
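The “robust yet fragile” property of scale-free networks can be demonstrated with a small simulation. The sketch below is not from the article; the simple preferential-attachment generator, node counts and seeds are illustrative assumptions. It grows a hub-dominated network, then compares the size of the largest connected component after removing 5% of nodes at random versus removing the 5% most-connected hubs.

```python
import random
from collections import defaultdict

def preferential_attachment_graph(n, m, seed=42):
    """Grow a scale-free graph: each new node links to m existing nodes
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))   # the first m nodes seed the network
    repeated = []              # node list weighted by degree
    for new in range(m, n):
        for t in set(targets):
            edges.add((new, t))
            repeated.extend([new, t])
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

def largest_component(live_nodes, edges):
    """Size of the largest connected component among surviving nodes."""
    adj = defaultdict(set)
    live = set(live_nodes)
    for a, b in edges:
        if a in live and b in live:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in live:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

n = 1000
edges = preferential_attachment_graph(n, m=2)
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

k = 50  # remove 5% of nodes in each scenario
random_removed = set(random.Random(0).sample(range(n), k))
hubs_removed = set(sorted(range(n), key=lambda v: -degree[v])[:k])

survive_random = largest_component(set(range(n)) - random_removed, edges)
survive_target = largest_component(set(range(n)) - hubs_removed, edges)
print("after random failures:", survive_random)
print("after targeted hub attack:", survive_target)
```

Under random failures the giant component barely shrinks, while removing the same number of hubs fragments the network far more severely, which is the robust-yet-fragile paradox the organizations described above inherit when they build on hub-dependent infrastructure.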
Collectively, organizations have leveraged complex systems to respond to customer and stakeholder demands and create value, unwittingly becoming more exposed to fragility at critical junctures. Systemic fragility has been tested by recent distributed denial-of-service (DDoS) attacks on critical Internet service providers and by recent ransomware attacks, both of which spread with alarming speed. What changed? After each event, risk professionals breathe a sigh of relief and continue pursuing the same strategies that leave organizations vulnerable to massive failure. The Great Recession of 2008–2009 is yet another example of the fragility of complex systems and of a tepid response to systemic risks. Do we mistakenly take survival as a sign of a cure for the symptoms of systemic illness?
After more than 20 years of explosive productivity growth, the layering of networked systems now poses some of the greatest risks to future growth and security. Productivity growth has stalled, in part because humans have become the bottleneck in this infrastructure. Billions of dollars are now rushing in to finance the next phase of the Internet of Things, which will extend our vulnerabilities to devices in our homes, our cars and, eventually, more. Is it really possible to fully understand these risks with 19th century risk management?
The dawn of the digital economy has resulted in the democratization of content and the disintermediation of past business models in ways unimaginable 20 years ago. I will spare you the boring science behind the limits of human cognition but let’s just say that if you can’t remember what you had for dinner last Wednesday night you are not alone.
But is that reason enough to change your approach to risk management? Not surprisingly, the answer is yes! Acknowledging that risk managers need better tools to measure more complex and emerging risks should no longer be considered a weakness. It also means that expecting employees to follow, without fail or assistance, the growing complexity of policies, procedures and IT controls required to deal with a myriad of risks may be unrealistic without better tools. 21st century risk management approaches are needed to respond to the new environment in which we now live.
Over the last 30 years, risk management programs have been built “in response” to risk failures in systems, processes and human error. Human-centered risk management starts with the human and redesigns internal controls to optimize the objectives of the organization while reducing risk. This may sound like a subtle difference, but it is in fact a radically different approach, though not a new one.
Human-factors engineers first met in 1955 in Southern California, yet the discipline’s contributions to safety across diverse industries are now under-appreciated. We don’t give a second thought to the technology that protects us when we travel in our cars, trucks and airplanes or undergo complex medical procedures. These advances in risk management did not happen by accident; they were designed into the products and services we enjoy today!
Each of these industries recognized that human error posed the greatest risk to the objectives of their respective organizations. Instead of blaming humans, however, they sought ways to reduce the complexity that leads to human error and found innovative ways to grow their markets while reducing risks. Imagine designing internal controls that are as intuitive as using a cell phone, allowing employees to focus on the job at hand instead of being distracted by multitasking! A human-centered risk program looks at the human-machine interaction to understand how the work environment contributes to risk.
I will return to this concept in subsequent papers to explain how the human-machine interaction contributes to risk. For now, suffice it to say that there is ample research and empirical data to support the argument. To further explain a human-centered risk approach, we must also understand how decision-making is impacted by 19th century risk practices.
Situational awareness is a critical component of human-centered risk management. It comprises one’s perception of events, comprehension of their meaning, projection of their status after conditions change or new data is introduced, and the ability to predict with clarity how change impacts outcomes and expectations. The opportunity in risk management is to improve situational awareness across the enterprise. Enterprise risks are important, but they are not all equal and should not be treated the same. Situational awareness helps senior executives understand the difference.
The challenge in most organizations is that situational awareness is assumed to be a byproduct of experience and training and is seldom revisited when the work environment changes to absorb new products, processes or technology. This failure to recognize the vulnerability in risk perception happens at all levels of the organization, from the boardroom down to the front line. The vast majority of changes introduced in organizations are minor in nature but accumulate over time, contributing to a lack of transparency, or Inattentional Blindness, that impairs situational awareness. This is one of the many reasons organizations are surprised by unanticipated events. We simply cannot see it coming!
Human-centered risk management focuses on designing situational awareness into the work environment from the boardroom down to the shop floor. This multidisciplinary approach requires a new set of tools and cognitive techniques to understand when imperfect information could lead to errors in judgment and decision-making. The principles and processes for designing situational awareness will be discussed in subsequent articles. The goal of human-centered risk management is to design scalable approaches to improve situational awareness across the enterprise.
Human-factors design and situational awareness meet at the “crossroads of technology and the liberal arts,” to quote the visionary Steve Jobs. These two factors in human-centered risk management can be achieved by selecting targeted approaches, which will be discussed in more detail in subsequent articles. I invite others to participate in this discussion if you too have an interest in reimagining approaches to risk management.
Traditional risk managers have conducted business the same way for most of the last 30 years, even as technology has advanced beyond our ability to keep pace. Through each financial crisis, risk management has been presented with many opportunities to change but instead resorts to the same approaches and the same inevitable outcomes. As competitive pressures grow, boards expect executives to do more with less, pushing risk professionals to adopt creative new ways to add value.
Risks are more complex and systemic in a digital economy, with the potential to amplify across disparate vectors critical to business performance. Social media is just one of many new amplifiers of risk that must be incorporated into enterprise risk programs. Asymmetric risks, like cyber risk, require a three-dimensional response that includes a deeper understanding of the complexity of the threat and simplicity of execution. The challenge of these more complex risks is even more daunting given the speed of business and the distributed nature of data in an interconnected digital economy.
The WannaCrypt cyber attack is just the latest example of how human behavior has become the key amplifier of risk in a digital economy, and of how situational awareness is part of the solution. There are many stories and opinions about the events and circumstances of the attack, and more details will emerge over time. The truth is that the world got lucky: the quick actions of one astute person unintentionally stopped the spread of the malware before broad damage could be done. No one should breathe a sigh of relief, because the attackers are now aware of the mistake they made and will, no doubt, correct it and learn to exploit weaknesses more effectively. The real question is, what did we learn?
The answer is that it’s not clear yet! What is clear is that cyber threats will continue to find ways to exploit the human element, requiring new approaches to understand the risk and find new solutions. But I digress….
The purpose of these musings is to introduce the emergence of a cognitive era in risk and to propose a path for adopting a human-centered strategy for addressing asymmetric complexity in enterprise risk. The themes I present in this series of articles will build a case for a supplemental approach to risk that incorporates an understanding of vulnerabilities at the human-machine interaction, applies human-factors design to internal controls, and introduces new technologies to enhance performance in managing and reducing human judgment error for complex risks.
Technology has evolved from a tool designed to free up humans from manual work to the development of information networks creating knowledge workers from the boardrooms of Wall Street to the factory floor. The excess capital created by technology is now being reinvested in next generation tools for more advanced uses.
Innovations in machine learning, artificial intelligence and other smart technologies promise even greater opportunity for personal convenience and wealth creation. Risk professionals must begin to understand the methods used in these cognitive support tools in order to evaluate which ones work best for complex risks. The use of smart technology in business applications is growing rapidly; however, the range of capabilities and outcomes varies widely across solutions, so an understanding of the limits of each vendor’s predictive powers is important. Conversely, the rapid advancement of technological innovation has also created a level of complexity that is contributing to the spread of risk in ways that are hard to imagine. It now appears that we are not connecting the dots between the inflection point of technology and human behavior. This is a complex discussion that requires a series of articles to fully unpack.
Risk professionals must begin to understand how human behavior contributes to risk, as well as the vulnerabilities at the human-machine interaction. Human error is increasingly cited as the leading cause of risk events in cross-industry data from IT risk, healthcare, automotive, aeronautics and other fields. [i][ii][iii][iv][v] Unfortunately, risk strategies incorporating human factors have been widely underrepresented in risk programs to date. That may be changing! At the core of this change is one constant: humans! Risk professionals who combine human-factors design with advanced analytical approaches and behavioral risk controls will be better positioned to bring real value to business strategy.
In the world in which we live and breathe, “trust” is developed over repeated interactions between parties who have built a relationship. On the Internet, trust is established much more quickly and subconsciously, based on cognitive cues of similarity or credibility that are not always reliable. This apparent conflict between trust paradigms is the Trust Conundrum. The trust conundrum has become the preferred and most successfully executed attack posture for hackers to exploit, owing to the relative ease of creating trust on the Internet. Cognitive hacks, also known as phishing, social engineering and by other names, are the biggest threat in cybersecurity as the sophistication and variants of these attacks evolve.
Trust in the Internet is not a new or novel topic for those who have followed these trends over the years. In 2003, Penn State’s Lions Center was created to study cybersecurity, information privacy and trust. The center was established to serve three main purposes: (a) conduct research to detect and remove threats of information misuse to society: mitigate risk, reduce uncertainty, and enhance predictability and trust; (b) produce leading scholars in interdisciplinary cyber-security research; and (c) become a national leader in information assurance education. In the same year, the University of Oxford’s Oxford Internet Institute produced a research report titled “Trust in the Internet: The Social Dynamics of an Experience Technology.” Today’s headlines would suggest that we still have much to learn about trust in the Internet.
Studies on trust in the Internet generally find that we maintain a healthy skepticism while conducting business on the Internet, owing to the perceived risks, yet we trust the Internet with an ever-expanding list of services. The studies suggest that our use of, and behavior on, the Internet is driven by trust. Generally speaking, the more we use the Internet the more we trust it, a concept called cybertrust. In other words, our trust in the Internet (“net confidence”) grows as our use increases, even as that use exposes us to more threats (“net risks”). This conundrum is partly why cyber attacks continue to grow unabated, and it points to a large and growing gap not fully addressed by cybersecurity professionals, technology frameworks and standards, or the policies and procedures designed to mitigate these risks. These studies are dated, and much more research on trust in the Internet is still needed, but the initial work provides some insight into the root cause of the problem.
The tension between developing net confidence and the threat of net risks will not be solved in this article. The observation, however, is that consumer behavior on the Internet is beginning to change. A more recent survey, posted on the blog of the National Telecommunications & Information Administration (NTIA) of the U.S. Department of Commerce, noted: “NTIA’s analysis of recent data shows that Americans are increasingly concerned about online security and privacy at a time when data breaches, cybersecurity incidents, and controversies over the privacy of online services have become more prominent. These concerns are prompting some Americans to limit their online activity, according to data collected for NTIA in July 2015 by the U.S. Census Bureau. This survey included several privacy and security questions, which were asked of more than 41,000 households that reported having at least one Internet user.”
The implication of this and other research is that, if nothing is done, the growth and enormous economic benefits of e-commerce may be curtailed over time as trust diminishes in the face of increasing threats in cyberspace. The NTIA’s July 2015 survey found, “Nineteen percent of Internet-using households—representing nearly 19 million households—reported that they had been affected by an online security breach, identity theft, or similar malicious activity during the 12 months prior.”
While most organizations have been primarily concerned with developing a defensive posture for the internal security of customer data, it is becoming increasingly clear that trust will be a critical factor in the expansion of the services and uses of the Internet by government, business and the providers of new technology. We are therefore at the beginning of a crossroads, where innovation, growth and security may depend as much on developing trust in the Internet as on the features and benefits of the products and services the Internet provides. There are few easy solutions to this problem, as demonstrated by the hacking of the DNC and the broader growth of breaches. The lack of progress since the early research into trust, however, demonstrates that a more comprehensive approach is needed. Joint ventures among academia, industry, government, the military and law enforcement must be forged to address these issues of privacy, security and the open Internet. The window of opportunity may be closing.