James Bone explores cognitive governance, the first pillar of the cognitive risk framework, and the five principles that drive the framework to simplify risk governance, add new rigor to risk assessment and empower every level of the organization with situational awareness to manage risk with the right tools.
The three lines of defense (3LoD), or more specifically risk governance, is being rethought on both sides of the Atlantic. A 3LoD model assigns three or more defensive lines of accountability to protect an organization, in the same vein as the Maginot Line’s layered defenses were designed after the lessons of Verdun. IT security also adopted layered security and controls, but is now evolving to incorporate risk governance approaches. The Maginot Line was considered state of the art for the trench-based defensive warfare of its era, yet it proved vulnerable to an offensive change in enemy strategy. Inflexibility in risk practice design and execution is the Achilles’ heel of good risk governance. To build risk programs that are responsive to change, we must redesign the solutions we are seeking in risk governance.
A cognitive risk framework clarifies risk governance and provides a pathway for organizations to understand and address the risks that matter. There are many reasons 3LoD is perceived to fall short of expectations, but a prominent one is unresolved conflict in perceptions of risk … the human element. Unresolved conflicts about risk undermine good risk governance, trust and communication.
In Risk Perceptions, Paul Slovic reflected on interpersonal conflicts: “Can an atmosphere of trust and mutual respect be created among opposing parties? How can we design an environment in which effective, multiway communication, constructive debate and compromise take place?”
A cognitive risk framework is designed to find simple solutions to risk management through a focus on empowering the human element. Please keep this perspective in mind as you digest the five principles of cognitive governance.
Principle #1: Risk Governance
Risk governance continues to be a concept that is hard to grasp and elusive to define in concrete terms. Attributes of risk governance such as corporate culture, risk appetite and strategy are assumed outcomes, but what are the right inputs to facilitate these behaviors? Good risk governance is sustainable through simplicity and design. In an attempt to simplify risk governance, two inputs are offered: discovery and mitigation.
Risk governance is presented here as two separate and distinct processes:
Risk Assessment (Discovery) and Risk Management (Mitigation)
Risk management is often conflated with risk assessment, but the skills, tools and responsibilities needed to address these two processes adequately require that they be separate and distinct functions within risk governance. This may appear counterintuitive at first glance, but too narrow a focus on either the mitigation of risk (management) or the discovery of risk (assessment) limits the full spectrum of opportunities to enhance risk governance.
Risk analysis is a continuous process of learning and discovery inclusive of quantitative and qualitative methods that reflect the complexity of risks facing all organizations. Risk analysis should be multidisciplinary in practice, borrowing from a variety of analytical methodologies. For this reason, a specialized team of diverse risk analysts might include data scientists, mathematicians, computer scientists (hackers), network engineers and architects, forensic accountants and other nontraditional disciplines alongside traditional risk professionals. The skill set mix is illustrative, but the design of the team should be driven by senior management to create situational awareness and the tools needed to analyze complex risks. More on this point in future installments.
This approach is not unique or radical. NASA routinely leverages different risk disciplines in preparation for space travel. Wall Street has assimilated physicists from the natural sciences with finance professionals, mathematicians and computer programmers to build risk solutions for their clients and to manage their own risk capital. Examples are plentiful in automotive design, aerospace and other high-risk industries. Success can be designed, but solving complex issues requires human input.
“Risk analysis is a political enterprise as well as a scientific one, and public perceptions of risk play an important role in risk analysis, adding issues of values, process, power and trust to the quantification issues typically considered by risk assessment professionals” (Slovic, 1999).
Separately, risk management is the responsibility of the board, senior management, audit and compliance. Risk management is equivalent to risk appetite, which is the purview of management to accept or reject. Senior executives are empowered by stakeholders inside and outside the firm to selectively choose among the risks that optimize performance and avoid the risks that hinder. Traditional risk managers are seldom empowered with these dual mandates, and I don’t suggest they should be.
In other words, risk management is the process of selecting among issues of value, power, process and trust in the validation of issues related to risk assessment. To actualize the benefits of sustainable risk governance, advanced risk practice must include expertise in discovery and mitigation. Organizations that develop deep knowledge in both disciplines and master conflicts in perceptions of risk will be better positioned for long-term success.
Experienced risk professionals understand that without the proper tone at the top, even the best risk management programs will fail. Tone at the top implies full engagement by senior executives in the risk management process as laid out in cognitive governance. Developing enhanced risk assessment processes builds confidence in risk-management decisions through greater rigor in risk analysis and recommendations to improve operational efficiency. Risk governance (Principle #1) transforms assurance through perpetual risk-learning.
Principle #2, perceptions of risk, provides an understanding of how to mitigate the conflicts that hurt cognitive governance.
Principle #2: Perceptions of Risk
Risk should be a topic on which we all agree, but it has become a four-letter word with such divergent meanings that a Google search returns some 232 million results! The mere mention of climate change, gun control or any number of social or political issues instantly creates a dividing line that is hard, if not impossible, to penetrate. Many of these conflicts are based on deeply held personal and political beliefs that remain intractable even in the face of science, data or facts, so how does an organization find common ground?
In discussing this issue with a chief operations officer at a major international bank, I was told, “we thought we understood risk management until the bank almost failed in the 2008 Great Recession.” The truth is, most organizations are reluctant to speak honestly about risks until it is too late or only after a “near miss.” In other words, risk is an abstract concept until we experience it firsthand. As a result, each of us brings our own unique experience of risk into any discussion that involves the possibility of failure. These unresolved conflicts in perceptions of risk create friction in organizations, causing blind spots that expose firms to potential failures, large and small.
But why is perception of risk important?
Each of us brings a different set of personal values and perspectives to the topic of risk. This partly explains why salespeople view risks differently than, say, accountants; risk is personal and situational to the people and circumstances involved. The vast majority of these conflicting perceptions of risk are well-managed, but many are seldom fully resolved, leading to conflicts that impede performance.
Risk professionals must become attuned to and listen for these conflicts, because they represent signals about risk. Perceptions of risk represent how most people feel about a risk, inclusive of positive or negative outcomes from their own experience. Researchers view risk as probability analysis. Understanding and reconciling these conflicts in perceptions of “risk as feelings” and “risk as analysis” is a low-cost solution that releases the potential for greater performance. Yet the devil in the details can only be fully uncovered through a process of discovery.
Principle #1 (risk governance) acts as a vehicle for learning about risks that enlightens principle #2 (perceptions of risk). Even the most seasoned executive is prone to errors in judgment as complexity grows. However, communications about risk are challenging when we lack agreed-upon procedures to reconcile these conflicts.
Albert Einstein provided a simple explanation:
“Not everything that counts can be counted, and not everything that can be counted counts.”
He knew the difference requires a process that creates an openness to learning.
Principle #1 (risk governance) formalizes continuous learning about risks in order to avoid analysis paralysis in decision-making. Risk governance focuses on building risk intelligence. Principle #2 (perceptions of risk) leverages risk intelligence to fill in the gaps data alone cannot.
Perceptions of risk are complex because they are seldom expressed verbally. In other words, how we act under pressure is more telling than mission statements or even codes of ethics! We say we are safe drivers, but we still text and drive. People take shortcuts when their jobs become too complex, leading to risky behavior. Unknowingly, organizations incentivize the wrong behaviors by not fully considering the impact of human factors.
Surprisingly, cognitive governance means fewer, simple rules instead of more policies and procedures. Risk intelligence narrows the “boil the ocean” approach to risk governance. The vast majority of risk programs spend 85 to 95 percent of 3LoD resources on known risks, leaving the biggest potential exposure, uncertainty, unaddressed.
Again, risk governance is about learning what the organization really values and why.
Organizations must begin to redesign the inputs to risk governance. The common denominator in all organizations is the human element, yet its impact is discounted in risk governance.
Principle #3: Human Element Design
A Ph.D. computer scientist friend from Norway once told me that organizations have a natural rhythm, like a heartbeat, and that cyber criminals understand and leverage this to plan their attacks. Busy, distracted and stressed-out workers are generally more vulnerable to cyberattack. No amount of controls, training, punishment or incentives to prevent phishing attacks or other social engineering schemes is effective in poorly designed work environments, including the C-suite and rank-and-file security professionals.
Cyber criminals understand the human element better than all risk professionals!
Human element design is an innovation in risk governance. Regulators have also begun to include behavioral factors, such as conduct risk, ethics and enhanced governance, in regulation, but thus far the focus is primarily on ensuring good customer outcomes. Sustainable risk governance must consider human factors a tool to increase productivity and reduce risk.
Human element design is evolving to address correlations and corrective actions in human factors and workplace errors, information security and operational risk. Principles #1 (risk governance) and #2 (perceptions of risk) assist principle #3 (human element design) in defining areas of opportunity to increase efficient operations and reduce risk in human factors.
Decades of research on human factors in the workplace have led to productivity gains and reductions in operational risk across many industries. We take for granted the declining injury rates in the auto and airline industries attributable to human factors design. Simple changes, such as seatbelts and navigation systems in cars and structured pilot-to-copilot communications during take-offs and landings, are as important as automation and big data projects, if not more so.
So, why is it important to focus on the human element more broadly now?
The primary reason to focus on the human element now is that technology has become pervasive in everything we do. Legacy systems, outsourcing, connected devices and networked applications increase complexity and potential risks in the workplace. The internet is built on an engineering concept that is both robust and fragile: users have access to websites around the world, but that access is subject to failure at any connection. Digital transformation extends and expands these new points of fragility, obscuring risk in a cyber void. In the physical world, humans are more aware of risk exposures. In a digital environment, risks are hidden beneath complexity.
Technology has driven productivity gains and prosperity in emerging and developed economies, adding convenience to many parts of our lives; however, cyber risks expose inherent vulnerabilities in cobbled-together systems. Email, social media, third-party partners, mobile devices and now even money move at speeds that increase the possibility for error and reduce our ability to “see” risk exposures that manifest within and beyond our perceptions of risk.
Developers and users of technology must begin to understand how the design and implementation of digital transformation create risk exposures. A “rush to market” mindset has put security on the back burner, leaving users on their own to figure it out instead of making security a market differentiator. Technology developers must begin to collaborate on how security can be made more intuitive for users and tech support. Tech SROs (self-regulatory organizations) are needed to stay ahead of bad actors and government regulation. Users must also understand the limits of technology to solve challenges by building in accommodations for how people work together, share and complete specific tasks.
By fixating on narrow issues like the insider threat, which pale in comparison to the larger issue of the human element, we miss the forest for the blades of grass. The first two principles are designed to support improvements in the human element, but a new risk practice must be developed with simplicity, security and efficient operations as the end products of risk governance.
I will address cognitive hacks separately; these are some of the most sophisticated threats in risk governance and require special treatment.
The human element principle is a focus on designing solutions that address cognitive load, build situational awareness and manage risks at the intersection of human-to-human and human-to-machine interaction. Apple, Amazon, Twitter and others have learned that simplicity works to promote human creativity for growth. Information security and risk governance must become intuitive and seamless to empower the human element.
This topic will be revisited in intentional design, the second pillar, but for now, suffice it to say that a focus on the human element will create a multiplier effect in terms of productivity, growth, and new products and services that do not exist today. Each of the five principles is a call to action to think more broadly about risks, today and in the future.
For now, let’s move on to principle #4, intelligence and modeling.
Principle #4: Intelligence & Modeling
“All models are wrong, but some are useful”
– George Box, Statistician
Box’s warning referred to the inclination to present excessively elaborate models as more correct than simple ones. In fact, the opposite is often true: simple approximations of reality may be more useful (e.g., E = mc²). More importantly, Box warned modelers to understand what is wrong in the model: “It is inappropriate to be concerned about mice when there are tigers abroad” (Box, 1978). Expanding on Box’s sentiment, I would add that useful models are not static and may become less useful as circumstances change or new information is presented.
For example, risk matrices have become widely adopted in risk practice and, more recently, in cybersecurity. A risk matrix is a simple tool to rank risks when users do not have the skill or time to perform more in-depth statistical analysis. Unfortunately, risk matrices have been misused by GRC consultants and risk practitioners, creating a false sense of assurance among senior executives. Good risk governance demands more rigor than simple risk matrices.
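To make the limitation concrete, here is a minimal sketch of the likelihood-times-impact scoring a typical risk matrix encodes. The register entries, scales and rating thresholds below are hypothetical, and the coarse ordinal buckets are exactly why such a tool cannot substitute for deeper statistical analysis:

```python
# Illustrative 5x5 risk matrix: score = likelihood x impact, each on a 1-5 scale.
# A coarse screening tool only, not a replacement for quantitative analysis.

def risk_score(likelihood: int, impact: int) -> int:
    """Ordinal score; higher means higher priority."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def rating(score: int) -> str:
    # Bucket thresholds are arbitrary conventions, which is part of the problem.
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical risk register entries: (likelihood, impact).
register = {
    "phishing": (4, 4),
    "vendor outage": (2, 5),
    "data entry error": (5, 2),
}

# Rank risks by score, highest first.
ranked = sorted(register.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (l, i) in ranked:
    print(f"{name}: score={risk_score(l, i)} ({rating(risk_score(l, i))})")
```

Note that "vendor outage" (rare but severe) and "data entry error" (frequent but minor) receive identical scores, illustrating how the matrix collapses very different risk profiles into one bucket.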
First, I want to be clear that the business intelligence and data modeling principle is not proposed as a big data project. Big data projects have gotten a bad rap, with conflicting examples of hype about the benefits, as well as humbling outcomes as measured in project success. Principle #4 is about developing structured data governance in order to improve business intelligence for better performance.
Let me give you a simple example: In 2007, prior to the start of the Great Recession, mutual funds had used limited amounts of derivatives to manage risk and boost returns. Wall Street began to increase leverage using derivatives to gain advantage; however, firms relied on manual processes and were unable to easily quantify increased exposure to counterparty risk. A simple question like “what is my total exposure?” took weeks, if not months, to answer and did not include comprehensive analysis of impacts to fund performance under specific risk scenarios. We know what happened in 2008, and many of those risks materialized without the risk mitigation needed to offset downside exposure.
Without getting too wonky, manual operational processes for managing collateral and heavy use of spreadsheets and paper contracts slowed the response rate to answer these questions and minimize risk in a more timely manner. Organizations need to understand the strategic questions that matter and create the ability to answer them in minutes, not months. Good risk governance proactively defines strategic questions and refines them as information changes the firm’s risk profile.
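As an illustration of the gap described above, here is a sketch of the kind of automated aggregation that would answer “what is my total exposure?” in seconds rather than months. The field names and figures are hypothetical; a real system would also net exposures under legal agreements, apply collateral and mark positions to market continuously:

```python
# A minimal sketch of aggregating current counterparty exposure from a
# position list. Gross (per-trade) exposure only; netting agreements and
# collateral, which reduce exposure, are deliberately omitted for brevity.
from collections import defaultdict

# Hypothetical derivatives book (names and values are made up).
positions = [
    {"counterparty": "Bank A", "notional": 10_000_000, "mtm": 250_000},
    {"counterparty": "Bank A", "notional": 5_000_000, "mtm": -100_000},
    {"counterparty": "Bank B", "notional": 8_000_000, "mtm": 75_000},
]

exposure = defaultdict(float)
for p in positions:
    # Only positive mark-to-market is at risk if the counterparty defaults;
    # negative MTM is money we owe them, not money at risk.
    exposure[p["counterparty"]] += max(p["mtm"], 0)

for cpty, amt in exposure.items():
    print(f"{cpty}: current exposure ${amt:,.0f}")
```

The point is not the arithmetic, which is trivial, but that the question becomes answerable on demand once positions live in structured data rather than spreadsheets and paper contracts.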
Business intelligence and data modeling is an iterative process of experimentation to ask important strategic questions and learn what really matters. I separated the two skill sets because the disciplines are different and the capabilities are specific to each organization. The key point of the intelligence and modeling principle is to incorporate a commitment in risk governance to business intelligence and data modeling, along with the patience to develop the skills needed to support business strategy.
Principle #4 should be designed to better understand business performance, reduce inefficiencies, evaluate security and manage the risks critical to strategy. This is a good place to transition to principle #5, capital structure.
Principle #5: Capital Structure
A firm’s capital structure is one of the key building blocks of long-term success for any viable business, but too often even well-established organizations stumble (and many fail) for reasons that seem inexplicable. The CFO is often elevated to assume the role of risk manager, and in many firms, staff responsible for risk management report to a CFO; however, upon closer analysis, the tools used by CFOs may be too narrow to manage the myriad risks that lead to business failure.
Finance students are well-versed in weighted average cost of capital calculations to achieve the right debt-to-equity mix. Organizations have become adept at managing cash flows, sales, strategy and production during stable market conditions. But how do we explain why so many firms appear to be caught flat-footed during rapid economic change and market disruption? Why is Amazon frequently blamed for causing a “retail apocalypse” in several industries? The true cause may be a pattern of inattentional blindness.
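The weighted average cost of capital mentioned above reduces to a one-line calculation, which is precisely why it is too narrow a lens on its own. A textbook sketch, with hypothetical inputs:

```python
# Textbook WACC: (E/V) * Re + (D/V) * Rd * (1 - Tc)
# where E = equity value, D = debt value, V = E + D,
# Re = cost of equity, Rd = pre-tax cost of debt, Tc = tax rate.

def wacc(equity: float, debt: float, cost_equity: float,
         cost_debt: float, tax_rate: float) -> float:
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

# Hypothetical firm: $600M equity at 10% cost, $400M debt at 5% pre-tax, 25% tax.
rate = wacc(600e6, 400e6, 0.10, 0.05, 0.25)
print(f"WACC = {rate:.2%}")  # prints: WACC = 7.50%
```

The formula captures the cost of capital under stable conditions; it says nothing about the blind spots, market disruption or legacy obligations discussed next, which is the point of this principle.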
Inattentional blindness occurs when an individual (or organization) fails to perceive an unexpected stimulus in plain sight. When it becomes impossible to attend to all the stimuli in a given situation, a temporary “blindness” effect can occur, as individuals fail to see unexpected (but often salient) objects or stimuli. In the Harvard Business Review article “Why Good Companies Go Bad,” Donald Sull, senior lecturer at the MIT Sloan School of Management and co-author, with Kathleen M. Eisenhardt, of Simple Rules, explains that active inertia is an organization’s tendency to follow established patterns of behavior, even in response to dramatic environmental shifts.
Success reinforces patterns of behavior that become intractable until disruption in the market. According to Sull,
“Organizations get stuck in the modes of thinking that brought success in the past. As market leaders, management simply accelerates all their tried-and-true activities. In trying to dig themselves out of a hole, they just deepen it.”
This may explain why firms spiral into failure, but it doesn’t explain why organizations miss the emergence of competitors or a change in the market in the first place.
Inattentional blindness occurs when firms ignore or fail to develop formal processes that proactively monitor market dynamics for threats to their leadership. Sull and Eisenhardt’s analysis is partially correct in that when firms react, the response is typically half-baked, resulting in damage to capital — or worse, a race to the bottom.
Interestingly, Sull also suggests that an organization’s inability to change extends to legacy relationships with customers, vendors, employees, suppliers and others, creating “shackles” that reinforce the inability to change. Contractual agreements memorialize these relationships and financial obligations, but are rarely revisited after the deals have been completed. Contracts are risk-transfer tools, but indemnification language may be subject to different state laws. How many firms truly understand the risk exposure and financial obligations in legacy contractual agreements? How many firms understand the root cause of financial leakage in contractual language?
Insurance companies are scrambling to mitigate cyber insurance accumulation risks embedded in legacy indemnification agreements. These hidden risks manifest because organizations lack formal processes to adequately assess legacy obligations, creating inattentional blindness to novel risks. Digital transformation will only accelerate accumulation risks in digital assets.
To summarize, the tools to manage capital do not stop with managing the cost of capital, cash flows and financial obligations. Capital can be put at risk by unanticipated blind spots in which risks and uncertainty are viewed too narrowly.
The first pillar, cognitive governance, is the driver of the next four pillars. The five pillars of a cognitive risk framework represent a new maturity level in enterprise risk management, proposed to broaden the view of risk governance and build resilience to evolving threats. It is anticipated that more advanced cognitive risk frameworks will be developed by others (and by myself) over time.
The treatment of the remaining four pillars will be shorter and focused on mitigating the issues and risks described in cognitive governance. Intentional design is the next pillar to be introduced.
Introducing the Human Element to Risk Management
As posted in Corporate Compliance Insights
As we move into the 4th Industrial Revolution (4IR), risk management is poised to undergo a significant shift. James Bone asks whether traditional risk management is keeping pace. (Hint: it’s not.) What’s really needed is a new approach to thinking about risks.
Framing the Problem
Generally speaking, organizations have one foot firmly planted in the 19th century and the other foot racing toward the future. The World Economic Forum calls this time in history the 4th Industrial Revolution, a $100 trillion opportunity representing the next generation of connected devices and autonomous systems needed to fuel a new leg of growth. Every revolution creates disruption, and this one will be no exception, including in how risks are managed.
The digital transformation underway is rewriting the rules of engagement. The adoption of digital strategies implies disaggregation of business processes to third-party providers, vendors and data aggregators who collectively increase organizational exposure to potential failure in security and business continuity. Reliance on third parties and sub-vendors extends the distance between customers and service providers, creating a “boundaryless” security environment. Traditional concepts of resiliency are challenged when the perimeter is as fluid as the disparate service providers cobbled together to serve different purposes. A single service provider may be robust in isolation, but may become fragile during a crisis in connected networks.
Digital transformation is, by design, the act of breaking down boundaries in order to reduce the “friction” of doing business. Automation is enabling speed, efficiency and multilayered products and services, all driven by higher computing power at lower prices. Digital unicorns, evolving as 10- to 20-year “overnight success stories,” give the impression of endless opportunity, and capital returns from early-stage tech firms continue to drive rapid expansion in diverse digital strategies.
Thus far, these risks have been fairly well-managed, with notable exceptions.
Given this rapid change, it is reasonable to ask if risk management is keeping pace as well. A simple case study may clarify the point and raise new questions.
In 2016, the U.S. presidential election ushered in a new risk: a massive cognitive hack. Researchers at Dartmouth College’s Thayer School of Engineering developed the theory of cognitive hacking in 2003, although the technique has been around since the beginning of the internet.
Cognitive hacks are designed to change the behavior and perception of the target of the attack. The use of a computer is optional in a cognitive hack. These hacks have been called phishing or social engineering attacks, but these terms don’t fully explain the diversity of methods involved. Cognitive hacks are cheap, effective and used by nation states and amateurs alike. Generally speaking, “deception” – in defense or offense – on the internet is the least expensive and most effective approach to bypass or enhance security, because humans are the softest target.
In “Cognitive Hack”, one chapter entitled “How to Hack an Election” describes how cognitive hacks have been used in political campaigns around the world to great effect. It is not surprising that it eventually made its way into American politics. The key point is that deception is a real risk that is growing in sophistication and effectiveness.
In researching why information security risks continue to escalate, it became clear that a new framework for assessing risks in a digital environment required a radically new approach to thinking about risks. The escalation of cyber threats against an onslaught of security spending and resources is called the “cyber paradox.” We now know the root cause is the human-machine interaction, but sustainable solutions have been evasive.
Here is what we know: [digital] risks thrive in diverse human behavior!
Some behaviors are predictable but evolve over time. Security methods that focus on behavioral analytics and defense have found success, but are too reactive to provide assurance. One interesting finding noted that a focus on simplicity and good working relationships plays a more effective role than technology solutions. A 2019 study of cyber resilience found that “infrastructure complexity was a net contributor to risks, while the human elements of role alignment, collaboration, problem resolution and mature leadership played key roles in building cyber resilience.”
In studying the phenomenon of how the human element contributes to risk, it became clear that risk professionals in the physical sciences were applying these same insights into human behavior and cognition to mitigate risks to personal safety and enable better human performance.
Diverse industries such as air travel, automotive, health care and tech have benefited from human element design to improve safety and create sustainable business models. However, the crime-as-a-service (CaaS) model may be the best example of how organized criminals on the dark web work together with the best architects of CaaS products and services, making billions selling to a growing market of buyers.
The International Telecommunication Union (ITU), in publishing its second Global Cybersecurity Index (GCI), noted that approximately 38 percent of countries have a cybersecurity strategy and another 12 percent are considering one.
The agency said more effort is needed in this critical area, particularly since it conveys that governments consider digital risks high priority. “Cybersecurity is an ecosystem where laws, organizations, skills, cooperation and technical implementation need to be in harmony to be most effective,” stated the report, adding that cybersecurity is “becoming more and more relevant in the minds of countries’ decision-makers.”
Ironically, social networks in the dark web have proven to be more robust than billions in technology spending.
The formation of systemic risks in a broader digital economy will be defined by how well security professionals bridge 19th-century vulnerabilities with next-century business models. Automation will enable the transition, but human behavior will determine the success or failure of the 4th Industrial Revolution.
A broader set of solutions is beyond the scope of this paper, but it will take a coordinated approach to make real progress.
The common denominator in all organizations is the human element, but we lack a formal approach to assess the transition from 19th-century approaches to this new digital environment. Not surprisingly, I am not the first, nor the last to consider the human element in cybersecurity, but I am convinced that the solutions are not purely prescriptive in nature, given the complexity of human behavior.
The assumption is that humans will simply come along like they have so often in the past. Digital transformation will require a more thoughtful and nuanced approach to the human-machine interaction in a boundaryless security environment.
Cognitive hackers from the CIA, NSA and FBI agree that addressing the human element is the most effective approach. A cognitive risk framework is designed to address the human element and enterprise risk management in broader ways than changing employee behavior. A cognitive risk framework is a fundamental shift in thinking about risk management and risk assessment and is ideally suited for the digital economy.
Technology is creating a profound change in how business is conducted. The fragility in these new relationships is concentrated at the human-machine interaction. Email is just one of dozens of iterations of vulnerable endpoints inside and outside of organizations. Advanced analytics will play a critical role in security, but organizational situational awareness will require broader insights.
Recent examples include the 2016 distributed denial of service (DDoS) attack on Dyn, an internet infrastructure company that provides domain name service (DNS) to its customers. A single service provider created unanticipated systemic risks across the East Coast.
DNS translates the domain name you type into your browser into the IP address of the server hosting that site. A DDoS attack on a DNS provider therefore prevents access to the websites that rely on it. Much of the East Coast was in a panic as the attack slowly spread. This is what happened to Amazon AWS, Twitter, Spotify, GitHub, Etsy, Vox, PayPal, Starbucks, Airbnb, Netflix and Reddit.
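The dependency is easy to demonstrate: name resolution happens before any connection is made, so when the resolver fails, a site becomes unreachable even if its own servers are healthy. A minimal sketch using Python's standard library (the hostname is illustrative):

```python
# Illustrates the DNS dependency: every connection starts with a name lookup.
# If resolution fails (e.g., the DNS provider is down), the target is
# unreachable regardless of whether the target's servers are up.
import socket

def resolve(hostname: str) -> str:
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        # Resolution failure: the site may be running, but without DNS
        # your browser has no IP address to connect to.
        return "unresolvable"

print(resolve("example.com"))
```

This is why a single DNS provider can become a systemic bottleneck: thousands of otherwise-independent sites share the same first step.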
These risks are known, but mitigating them requires complex arrangements that take time. These visible examples of bottlenecks in the network offer an opportunity to reduce fragility in the internet; however, resilience on the internet will require trusted partnerships to build robust networks beyond individual relationships.
The collaborative development of the internet is the best example of complete autonomy, robustness and fragility. The 4th Industrial Revolution will require cooperation on security, risk mitigation and shared utilities that benefit the next leg of infrastructure.
Unfortunately, systemic risks are already forming that may threaten free trade in technology as nations begin to plan for and impose restrictions to internet access. A recent Bloomberg article lays bare the global divisions forming regionally as countries rethink an open internet amid political and security concerns.
So, why do we need a cognitive risk framework?
Cognitive risk management is a multidisciplinary focus on human behavior and the factors that enhance or distract from good outcomes. Existing risk frameworks tend to consider the downside of human behavior, but human behavior is not one-dimensional, and neither are the solutions. Paradoxically, cybercriminals are experts at exploiting trust in a digital environment and use a variety of methods (cognitive hacks) to change behavior in order to circumvent information security controls.
A simple answer to why is that cognitive risks are pervasive in all organizations but are too often ignored until it is too late, or not understood in the context of organizational performance. Cognitive risks are diverse, ranging from a toxic work environment, workplace bias and decision bias to strategic and organizational failure. More recent research is starting to paint a more vivid picture of the role of human error in the workplace, but much of this research is largely ignored in existing risk practice. A cognitive risk framework is needed to address the most challenging risk we face … the human mind!
A cognitive risk framework works just like digital transformation: by breaking down the organizational boundaries that prevent optimal performance and risk reduction.
Redesigning Risk Management for the 4th Industrial Revolution!
The Cognitive Risk Framework for Cybersecurity and Enterprise Risk Management is a first attempt at developing a fluid set of pillars and practices to complement COSO ERM, ISO 31000, NIST and other risk frameworks with the human at the center. Each of the Five Pillars will be explored as a new model for resilience in the era of digital transformation.
It is time to humanize risk management!
A cognitive risk framework has five pillars. Subsequent articles will break down each of the five pillars to demonstrate how each pillar supports the other as the organization develops a more resilient approach to risk management.
The Five Pillars of a Cognitive Risk Framework include:
I. Cognitive Governance
II. Intentional Design
III. Risk Intelligence & Active Defense
IV. Cognitive Security/Human Elements
V. Decision Support (situational awareness)
Lastly, as part of the rollout of a cognitive risk framework, I am conducting research at Columbia University’s School of Professional Studies to better understand advances in risk practice beyond existing risk frameworks. My goal, with your help, is to better understand how risk management practice is evolving across as many risk disciplines as possible. Participants in the survey will be given free access to the final report. An executive summary will be published with the findings. Contact me at firstname.lastname@example.org. Emails will be used only for the purpose of distributing the survey and its findings.
*Correction: The reference to Level 3 Communication experiencing a cyberattack was reported incorrectly. The reference to Level 3 is related to a 2013 outage due to a “failing fiber optic switch” not a cyberattack. Apologies for the incorrect attribution. The purpose of the reference is related to systemic risks in the Internet. James Bone
James Bone is the author of Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind (Taylor & Francis, 2017) and is a contributing author for Compliance Week, Corporate Compliance Insights, and Life Science Compliance Updates. James is a lecturer at Columbia University’s School of Professional Studies in the Enterprise Risk Management program and consults on ERM practice.
He is the founder and president of Global Compliance Associates, LLC and Executive Director of TheGRCBlueBook. James founded Global Compliance Associates, LLC to create the first cognitive risk management advisory practice. James graduated from Drury University with a B.A. in Business Administration, from Boston University with an M.A. in Management and from Harvard University with an M.A. in Business Management, Finance and Risk Management.
Christopher P. Skroupa: What is the thesis of your book Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind and how does it fit in with recent events in cyber security?
James Bone: Cognitive Hack follows two rising narrative arcs in cyber warfare: the rise of the “hacker” as an industry and the “cyber paradox,” namely why billions spent on cyber security fail to make us safe. The backstory of the two narratives reveals a number of contradictions about cyber security, as well as how surprisingly simple it is for hackers to bypass defenses. The cyber battleground has shifted from an attack on hard assets to a much softer target: the human mind. If human behavior is the new and last “weakest link” in the cyber security armor, is it possible to build cognitive defenses at the intersection of human-machine interactions? The answer is yes, but the change that is needed requires a new way of thinking about security, data governance and strategy. The two arcs meet at the crossroads of data intelligence, deception and a reframing of security around cognitive strategies.
The purpose of Cognitive Hack is to look not only at the digital footprint left behind from cyber threats, but to go further—behind the scenes, so to speak—to understand the events leading up to the breach. Stories, like data, may not be exhaustive, but they do help to paint in the details left out. The challenge is finding new information buried just below the surface that might reveal a fresh perspective. The book explores recent events taken from today’s headlines to serve as the basis for providing context and insight into these two questions.
Skroupa: IoT has been highly scrutinized as having the potential to both increase technological efficiency and broaden our cyber vulnerabilities. Do you believe the risks outweigh the rewards? Why?
Bone: The Internet outage in October 2016 is a perfect example of the risks of the power and stealth of IoT. What many are not aware of is that hackers have been experimenting with IoT attacks in increasingly more complex and potentially damaging ways. The TOR Network, used in the Dark Web to provide legitimate and illegitimate users anonymity, was almost taken down by an IoT attack. Security researchers have been warning of other examples of connected smart devices being used to launch DDoS attacks that have not garnered media attention. As the number of smart devices spreads, the threat only grows. The anonymous attacker in October is said to have used only 100,000 devices. Imagine what could be done with one billion devices as manufacturers globally export them, creating a new network of insecure connections with little to no security in place to detect, correct or prevent hackers from launching attacks from anywhere in the world.
The question of weighing the risks versus the rewards is an appropriate one. Consider this: The federal government has standards for regulating the food we eat, the drugs we take, the cars we drive and a host of other consumer goods and services, but the single most important tool the world increasingly depends on has no gatekeeper to ensure that the products and services connected to the Internet don’t endanger national security or pose a risk to its users. At a minimum, manufacturers of IoT must put measures in place to detect these threats, disable IoT devices once an attack starts and communicate the risks of IoT more transparently. Lastly, the legal community has also not kept pace with the development of IoT; however, this is an area that will be ripe for class action lawsuits in the near future.
Skroupa: What emerging trends in cyber security can we anticipate from the increasing commonality of IoT?
Bone: Cyber crime has grown into a thriving black market complete with active buyers and sellers, independent contractors and major players who, collectively, have developed a mature economy of products, services, and shared skills, creating a dynamic laboratory of increasingly powerful cyber tools unimaginable before now. On the other side, cyber defense strategies have not kept pace even as costs continue to skyrocket amid asymmetric and opportunistic attacks. However, a few silver linings are starting to emerge around a cross-disciplinary science called Cognitive Security (CogSec), Intelligence and Security Informatics (ISI) programs, Deception Defense, and a framework of Cognitive Risk Management for cyber security.
On the other hand, the job description of “hacker” is evolving rapidly with some wearing “white hats,” some with “black hats” and still others with “grey hats.” Countries around the world are developing cyber talent with complex skills to build or break security defenses using easily shared custom tools.
The implications of the rise of the hacker as a community and an industry will have long-term ramifications for our economy and national security that deserve more attention; otherwise, the unintended consequences could be significant. In the same light, the book looks at the opportunity and challenge of building trust into networked systems. Building trust in networks is not a new concept, but it is too often a secondary or tertiary consideration as systems designers are forced to rush products and services to market to capture market share, leaving security considerations to corporate buyers. IoT is a great example of this challenge.
Skroupa: Could you briefly describe the new Cognitive Risk Framework you’ve proposed in your book as a cyber security strategy?
Bone: First of all, this is the first cognitive risk framework of its kind designed for enterprise risk management. The Cognitive Risk Framework for Cybersecurity (CRFC) is an overarching risk framework that integrates technology and behavioral science to create novel approaches in internal controls design that act as countermeasures lowering the risk of cognitive hacks. The framework has targeted cognitive hacks as a primary attack vector because of the high success rate of these attacks and the overall volume of cognitive hacks versus more conventional threats. The cognitive risk framework is a fundamental redesign of enterprise risk management and internal controls design for cybersecurity but is equally relevant for managing risks of any kind.
The concepts referenced in the CRFC are drawn from a large body of research in multidisciplinary topics. Cognitive risk management is a sister discipline of a parallel body of science called Cognitive Informatics Security, or CogSec. It is also important to point out that, as the creator of the CRFC, I have borrowed the principles and practices prescribed herein from cognitive informatics security, machine learning, artificial intelligence (AI), and behavioral and cognitive science, among other fields that are still evolving. The Cognitive Risk Framework for Cybersecurity revolves around five pillars: Intentional Controls Design, Cognitive Informatics Security, Cognitive Risk Governance, Cybersecurity Intelligence and Active Defense Strategies, and Legal “Best Efforts” considerations in Cyberspace.
Many organizations are doing some aspect of a “cogrisk” program but haven’t formulated a complete framework; others have not even considered the possibility; and still others are on the path toward a functioning framework influenced by management. The Cognitive Risk Framework for Cybersecurity is a response to an interim process of transitioning to a new level of business operations (cognitive computing) informed by better intelligence to solve the problems that hinder growth.
Christopher P. Skroupa is the founder and CEO of Skytop Strategies, a global organizer of conferences.
“Intelligent Automation” is such a new term that you won’t find it in Wikipedia or Merriam-Webster. However, we are clearly in the early stages of a technological transformation that’s no less dramatic than the one spurred by the emergence of the Internet.
A new age in quantitative and empirical methods will change how businesses operate as well as the role of traditional finance professionals. To compete in this environment, finance teams must be willing to adopt new operating models that reduce costs and improve performance through better data. In short, a new framework is needed for designing an “intelligent organization.”
The convergence of technology and cognitive science provides finance professionals with powerful new tools to tackle complex problems with more certainty. Advanced analytics and automation will increasingly play bigger roles as tactical solutions to drive efficiency or to help executives solve complex problems.
But the real opportunities lie in reimagining the enterprise as an intelligent organization — one designed to create situational awareness with tools capable of analyzing disparate data in real or near-real time.
Automation of redundant processes is only the first step. An intelligent organization strategically designs automation to connect disparate systems (e.g., data sources) by enabling users with tools to quickly respond or adjust to threats and opportunities in the business.
Situational awareness is the product of this design. In order to push decision-making deeper into the organization, line staff need the tools and information to respond to change in the business and the flexibility to adjust and mitigate problems within prescribed limits. Likewise, senior executives need near-real time data that provides the means to query performance across different lines of business with confidence and anticipate impacts to singular or enterprise events in order to avoid costly mistakes.
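The idea of decision rights within prescribed limits can be made concrete with a small sketch. The roles and thresholds below are hypothetical, not drawn from any actual framework: actions within a role's limit are handled locally, and anything larger escalates up the chain.

```python
# Hypothetical approval limits per role (amounts are illustrative).
APPROVAL_LIMITS = {"line_staff": 10_000, "manager": 100_000}

def route_adjustment(amount, role):
    """Approve locally if within the role's prescribed limit; otherwise escalate.
    Unknown roles get a limit of zero, so everything they touch escalates."""
    limit = APPROVAL_LIMITS.get(role, 0)
    return "approved" if amount <= limit else "escalate"
```

Encoding the limits explicitly is what lets decision-making move deeper into the organization without losing control: staff act quickly inside the boundary, and only exceptions consume senior management's attention.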
Financial reporting is becoming increasingly complex at the same time finance professionals are being challenged to manage emerging risks, reduce costs, and add value to strategic objectives. These competing mandates require new support tools that deliver intelligence and inspire greater confidence in the numbers.
Thankfully, a range of new automation tools is now available to help finance professionals achieve better outcomes against this dual mandate. However, to be successful, finance executives need a new cognitive framework that anticipates the needs of staff and provides access to the right data in a resilient manner.
This cognitive framework provides finance with a design road map that includes human elements, focused on how staff use technology and on simplifying the rollout and implementation of advanced analytical tools.
The framework is composed of five pillars, each designed to complement the others in the implementation of intelligent automation and the development of an intelligent organization:
- Cognitive governance
- Intentional control design
- Business intelligence
- Performance management
- Situational awareness
Cognitive governance is the driver of intelligent automation as a strategic tool in guiding organizational outcomes. The goal of cognitive governance, as the name implies, is to facilitate the design of intelligent automation to create actionable business intelligence, improve decision-making, and reduce manual processes that lead to poor or uncertain outcomes.
In other words, cognitive governance systematically identifies “blind spots” across the firm then directs intelligent automation to reduce or eliminate the blind spots.
The end game is to create situational awareness at multiple levels of the organization with better tools to understand risks, errors in judgment, and inefficient processes. Human error as a result of decision-making under uncertainty is increasingly recognized as the greatest risk to organizational success. Therefore, it is crucial for senior management to create a systemic framework for reducing blind spots in a timely manner. Cognitive governance sets the tone and direction for the other four pillars.
Intentional control design, business intelligence, and performance management are tools for creating situational awareness in response to cognitive governance mandates. A cognitive framework does not require huge investments in the latest big data “shiny objects.” It’s not necessary to spend millions on machine learning or other forms of artificial intelligence. Alternative automation tools for simplifying operations are readily available today, as is access to advanced analytics, for organizations large and small, from a variety of cloud services.
However, for firms that want to use machine learning/AI, a cognitive framework easily integrates any widely used tool or regulatory risk framework. A cognitive framework is focused on a factor that others ignore: how humans interact with and use technology to get their work done most effectively.
Network complexity has been identified as a strategic bottleneck in response times for dealing with cybersecurity risks, cost of technology, and inflexibility in fast-paced business environments. Without a proper framework, improperly designed automation processes may simply add to infrastructure complexity.
There is also a dark side to machine learning/AI that organizations must understand in order to anticipate best use cases and avoid the inevitable missteps that will come with autonomous systems. Microsoft learned a hard lesson with “Tay,” its chatbot project, which was shelved when users taught the bot racist remarks. While there are many uses for AI, this technology is still in an experimental stage of growth.
Overly complicated approaches to intelligent automation are the leading cause of failed big data projects. Simplicity is the new value proposition that should be expected from the implementation of technology solutions. Intelligent automation is one tool to accomplish that goal, but execution requires a framework that understands how people use new technology effectively.
Simplicity must be a strategic design imperative based on a framework for creating situational awareness across the enterprise.
James Bone is a cognitive risk consultant; a lecturer at Columbia University’s School of Professional Studies; founder of TheGRCBlueBook.com, an online directory of governance, risk, and compliance tools; and author of, “Cognitive Hack: The New Battleground in Cybersecurity … the Human Mind.”
To see the post in CFO magazine, click the link above.
TheGRCBlueBook combines risk advisory services with cutting edge research, a knowledge of the GRC marketplace and a platform for GRC solutions providers to educate and showcase their products and services to a global market for risk, audit, compliance and IT professionals seeking cost effective solutions to manage a variety of risks. Partner with TheGRCBlueBook to help educate corporate buyers about your GRC products and services.
Our [KPMG] work as audit professionals is fundamentally about “trust.” For the capital markets to operate effectively and to the benefit of investors and society more broadly, there must be integrity and confidence in the system. In serving the capital markets and the public interest, we work to help instill trust and confidence in the information used to make important decisions.
In the following pages, we begin to explore how we can continue to promote trust during a time of profound change across the business landscape. Given the explosion of data and the digitization of our lives, we want to promote a discussion about how the audit profession must evolve its tools and approach to keep up with the pace of change and remain relevant in a dynamic marketplace. Specifically, our profession must embrace the use of advanced technologies, including data and analytics (D&A), robotics, automation and cognitive intelligence, to manage processes, support planning and inform decision making. At KPMG we are constantly thinking about the development of innovative capabilities and technologies that will enhance quality and strengthen the relevance of our audit into the future.
In part I of Cognitive Risk Framework for Cybersecurity, I introduced the reasoning for developing a bridge from existing IT and risk frameworks to the next generation of risk management based on cognitive science. These concepts are no longer theoretical and, in fact, are evolving faster than most IT security and risk professionals appreciate. In part II, I introduce the pillars of a cognitive risk framework for cybersecurity that make this program operational. The pillars represent existing technology and concepts that are increasingly being adopted by technology firms, government agencies, computer scientists and industries as diverse as healthcare, biotechnology, financial services and many others.
The following is an abbreviated version of the cognitive risk framework for cybersecurity that will be published later this year.
A cognitive risk framework is fundamental to the integration of existing internal controls, risk management practice, cognitive security technology and the people who are responsible for executing on the program components that make up enterprise risk management. Cognitive risk fills a missing gap in today’s cybersecurity programs, which fail to fully incorporate how to address the “softest target”: the human mind.
A functioning cognitive risk framework for cybersecurity provides guidance for the development of a CogSec response that is three-dimensional instead of a one-dimensional defensive posture. Further, cognitive risk requires an expanded taxonomy to level set expectations about risk management through scientific methods and improve communications about risks. A CRFC is an evolutionary step from intuition and hunches to quantitative analysis and measurement of risks. The first step in the transition to a CRFC is to develop an organizational Cognitive Map. Paul Slovic’s Perception of Risk research is a guide for starting the process to understand how decision-makers across an organization perceive key risks in order to prioritize actionable steps for a range of events large and small. A Cognitive Map is one of many tools risk professionals must use to expand discussions on risk and form agreements for enhanced techniques in cybersecurity.
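One way to begin building a Cognitive Map in Slovic's sense is to survey decision-makers' ratings of key risks and surface where perceptions diverge, since high disagreement signals exactly the unresolved conflicts the framework targets. The ratings below are invented for illustration; the aggregation approach (ranking by spread rather than by average severity) is one plausible sketch, not a prescribed method.

```python
from statistics import stdev

# Illustrative survey: four executives rate each risk from 1 (low) to 5 (high).
ratings = {
    "cyber breach": [5, 2, 4, 5],
    "vendor outage": [3, 3, 3, 4],
}

def divergence(scores):
    """Standard deviation of ratings: high spread flags unresolved
    conflict in risk perception, independent of average severity."""
    return stdev(scores)

# Rank risks by disagreement first, so the conversation starts where
# perceptions differ most rather than where the mean score is highest.
conflicts = sorted(ratings, key=lambda r: divergence(ratings[r]), reverse=True)
```

Even this toy version makes the point: "cyber breach" averages higher, but what the Cognitive Map exposes is that the executives do not agree about it, which is the conversation a risk program needs to have first.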
Risk communication sounds very simple on the surface, but even risk experts use the term “risk” with different meanings without recognizing the contradictions. A senior executive at a major bank told me that she thought the firm understood its risks, but the 2008 Great Recession revealed major disagreements in how the firm talked about risk and the decisions made to manage it. Poor communication about risk is more common than not without a structured way to put risks in context and account for a diversity of risk perceptions. “The fact that the word ‘risk’ has so many different meanings often causes problems in communication,” according to Slovic.
Organizations rarely openly discuss these differences or even understand they exist until a major risk event forces these issues onto the table. Even then, the focus of the discussion quickly pivots to solving the problem with short-term solutions, leaving the underlying conflicts unresolved. Slovic, Peters, Finucane and MacGregor (2005) posited that “risk is perceived and acted on in two ways: Risk as Feelings refers to individuals’ fast, instinctive, and intuitive reactions to danger. Risk as Analysis brings logic, reason, and scientific deliberation to bear on risk management.”
Some refer to this exercise as forming a “risk appetite,” but again this term is vague and doesn’t capture the full range of ways individuals experience risk. Researchers now recognize diverse views of risk as relevant, from the nonscientist who views risks subjectively to scientists who evaluate adverse events as the probability and consequences of risks. A deeper view into risk perceptions explains why there is little consensus on the role of risk management and dissatisfaction when expectations are not met.
Techniques for reconciling these differences create a forum that leads to better discussions about risk. Discussions about risk management are extremely important to organizational success, yet paradoxically produce discomfort, whether in personal or business life, when planning for the future. Personal experience, in conjunction with a body of research, demonstrates that the topic of risk tends to elicit a strong emotional response. Kahneman and Tversky called this response “loss aversion.” “Numerous studies have shown that people feel losses more deeply than gains of the same value (Kahneman and Tversky 1979, Tversky and Kahneman 1991).” Losses have a powerful psychological impact that lingers long after the fact, coloring one’s perception of risk taking.
Over time, these perceptions about risk and loss become embedded in the unconscious, and by virtue of the vagaries of memory, the facts and circumstances fade. The natural bias to avoid loss leads us to a fallacy that assumes losses are avoidable if people simply make the right choices. This common view of risk awareness fails to account for uncertainty, the leading cause of surprise, when expectations are not met. This fallacy of perceived risk produces an underestimation or overestimation of the probability of success or failure.
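The asymmetry between losses and gains that Kahneman and Tversky documented can be written down directly. The sketch below uses the prospect-theory value function with the median parameters they estimated in their 1992 cumulative prospect theory paper (alpha = beta = 0.88, lambda = 2.25); it is a minimal illustration, not a model of any particular decision.

```python
# Prospect-theory value function (Tversky and Kahneman 1992 median parameters).
ALPHA = BETA = 0.88   # diminishing sensitivity: each extra dollar matters less
LAMBDA = 2.25         # loss aversion: losses loom roughly 2.25x larger than gains

def value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0)
    relative to the decision-maker's reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# A $100 loss feels substantially worse than a $100 gain feels good.
gain, loss = value(100), value(-100)
```

With these parameters the felt magnitude of a loss is exactly lambda times the felt magnitude of an equal gain, which is the formal version of "losses linger": the same dollar amount registers more than twice as strongly on the downside.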
A Cognitive Risk Framework for Cybersecurity, or any other risk, requires a clear understanding of, and agreement on, the roles of data management; risk and decision-support analytics; parameters for dealing with uncertainty (imperfect information); and how technology is integrated to facilitate the expansion of what Herbert A. Simon called “bounded rationality.” Building a CRFC does not eliminate risk; it develops a new kind of intelligence about risk.
A cognitive risk framework is needed to advance risk management in the same way economists deconstructed the “rational man” theory. The myth of “homo economicus” still lingers in risk management, damaging the credibility of the profession. “Homo economicus, economic man, is a concept in many economic theories portraying humans as consistently rational and narrowly self-interested agents who usually pursue their subjectively defined ends optimally.”[i] These concepts have since been contrasted with Simon’s bounded rationality, not to mention any number of financial market failures and instances of unethical and fraudulent behavior that stand as evidence of the weakness in the argument. A cognitive risk framework will serve to broaden awareness of the science of cognitive hacks, as well as the factors that limit our ability to effectively deal with the Cyber Paradox, going beyond selecting a defensive strategy. Let’s take a closer look at what a cognitive risk framework for cybersecurity looks like and consider how to operationalize the program.
The foundational base (“Guiding Principles”) for developing a cognitive risk framework for cybersecurity starts with Slovic’s “Cognitive Map – Perceptions of Risk” and an orientation in Simon’s “Bounded Rationality” and Kahneman and Tversky’s “Prospect Theory: An Analysis of Decision under Risk.” In other words, a cognitive risk framework formally develops a structure for actualizing the two ways people fundamentally perceive adverse events: “risk as feelings” and “risk as analysis.” Each of the following guiding principles is a foundational building block for a more rigorous science-based approach to risk management.
The CRFC guiding principles expand the language of risk with concepts from behavioral science to build a bridge connecting decision science, technology and risk management. The CRFC guiding principles establish a link to, and recognize the important work undertaken by, the COSO Enterprise Risk Framework for Internal Controls, the ISO 31000 Risk Management Framework, and the NIST and ISO/IEC 27001 Information Security standards, which make reference to the need for processes to deal with the human element. The opportunity exists to extend the cognitive risk framework to other risk programs; however, the focus here is on cybersecurity and the program components needed to operationalize its execution. The CRFC program components include five pillars: 1) Intentional Controls Design; 2) Cognitive Informatics Security (Security Informatics); 3) Cognitive Risk Governance; 4) Cybersecurity Intelligence & Active Defense Strategies; and 5) Legal “Best Efforts” Considerations in Cyberspace.
Brief overview of the Five Pillars of a CRFC:
Intentional Controls Design
Intentional controls design recognizes the importance of trust in networked information systems by advocating for the automated integration of internal controls design across IT, operational, audit and compliance controls. It is the process of embedding information security controls, active monitoring, audit reporting, risk management assessment, and operational policy and procedure controls into networked information systems through user-guided GUI application design and a data repository, enabling machine learning, artificial intelligence and other currently available smart-system methods.
Intentional controls design is an explicit choice made by information security analysts to reduce or remove reliance on people through the use of automated controls. Automated controls must be animated through the use of machine learning, artificial intelligence algorithms, and other automation based on regulatory guidance and internal policy. Intentional controls design is implemented on two levels of hierarchy: 1) Enterprise-level intentional controls design anticipates that these controls are mandatory across the organization and can be changed or modified only with the approval of the senior executives responsible for enterprise governance; 2) Operational-level intentional controls design anticipates that each division or business unit may require unique control designs to account for differences among lines of business in regulatory regimes, risk profile, vendor relationships and other factors unique to these operations.
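The two-level hierarchy described above can be sketched as a small policy check. Everything here is hypothetical (the class, role names and example controls are illustrative, not part of the CRFC itself): enterprise controls apply everywhere and require senior-executive approval to change, while operational controls may be tailored by business-unit owners.

```python
class Control:
    """A control with a name and a hierarchy level:
    'enterprise' (mandatory firm-wide) or 'operational' (unit-specific)."""
    def __init__(self, name, level):
        self.name = name
        self.level = level

def can_modify(control, approver_role):
    """Enterprise-level controls may be changed only by senior executives;
    operational controls may also be changed by business-unit owners."""
    if control.level == "enterprise":
        return approver_role == "senior_executive"
    return approver_role in ("senior_executive", "unit_owner")
```

Separating the two levels in code mirrors the governance intent: firm-wide controls stay uniform by construction, while unit-level variation is permitted exactly where the framework says it should be.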
Cognitive Informatics Security (Security Informatics)
Cognitive informatics security is a rapidly evolving discipline within cybersecurity and healthcare, with so many branches that it is difficult to settle on one definition. Think of cognitive security as an overarching strategy for cybersecurity executed through a variety of advanced computing methodologies.
“Cognitive computing has the ability to tap into and make sense of security data that has previously been dark to an organization’s defenses, enabling security analysts to gain new insights and respond to threats with greater confidence at scale and speed. Cognitive systems are taught, not programmed, using the same types of unstructured information that security analysts rely on.”[i]
The International Journal of Cognitive Informatics and Natural Intelligence defines cognitive informatics as “a transdisciplinary enquiry of computer science, information sciences, cognitive science, and intelligence science that investigates the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing. Cognitive computing is an emerging paradigm of intelligent computing methodologies and systems based on cognitive informatics that implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain.”[ii]
Cyber Risk Governance
The Cyber Risk Governance pillar is concerned with the role of the board of directors and senior management in strategic planning and executive sponsorship of cybersecurity. Boards of directors have historically delegated risk and compliance reporting to the audit committee, although a few forward-thinking firms have appointed a senior risk executive who reports directly to the board. In order to implement a Cognitive Risk Framework for Cybersecurity, the entire board must participate in an orientation on the guiding principles to set the stage and tone for the transformation required to incorporate cognition into a security program.
The framework represents a transformational change in risk management, cybersecurity defense and an understanding of decision-making under uncertainty. To date, traditional risk management has lacked scientific rigor through quantitative analysis and predictive science. The framework dispels myths about risk management while aligning the practice of security and risk management with the best science and technology available today and in the future.
Transformational change from an old to a new framework requires leadership from the board and senior management that goes beyond the sponsorship of a few new initiatives. The framework represents a fundamentally new vision for what is possible in risk and security to address cybersecurity or enterprise risk management. Change is challenging for most organizations; however, the transformation required to move to a new level of cognition may be the hardest, but most effective, change any firm will ever undertake. This is exactly why the board and senior management must understand the framing of decision-making and the psychology of choice. Why, you may ask, must senior management understand what one does naturally and intuitively? The answer is that change is a choice, and the process of decision-making among a set of options is not as intuitive or simple as one thinks.
Cybersecurity Intelligence and Defense Strategies
“Information on its own may be of utility to the commander, but when related to other information about the operational environment and considered in the light of past experience, it gives rise to a new understanding of the information, which may be termed ‘intelligence.’”[i]
The Cybersecurity Intelligence and Defense Strategies (CIDS) pillar is based on the principles of the 17-member defense and intelligence community’s “Joint Intelligence” report. Cybersecurity intelligence is conducted to develop information on four levels: strategic, operational, tactical and asymmetrical. Strategic intelligence should be developed for the board of directors, senior management and the Cyber Risk Governance committee. Operational intelligence should be designed to give security professionals an understanding of threats and operational environment vulnerabilities. Tactical intelligence must provide directional guidance for offensive and defensive security strategies. Asymmetrical intelligence strategies include monitoring the cyber black market and gathering other market intelligence from law enforcement and other means where possible.
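The four intelligence levels and their audiences could be captured as a simple routing table. This is a sketch under stated assumptions: the audience names and the `route` helper are hypothetical and do not appear in the framework itself; only the four levels come from the text above.

```python
from enum import Enum

class IntelLevel(Enum):
    STRATEGIC = "strategic"        # board, senior management, governance committee
    OPERATIONAL = "operational"    # security professionals: threats and vulnerabilities
    TACTICAL = "tactical"          # directional guidance for offense and defense
    ASYMMETRICAL = "asymmetrical"  # black-market and law-enforcement intelligence

# Hypothetical routing table: which audience receives each intelligence product.
AUDIENCE = {
    IntelLevel.STRATEGIC: ["board", "senior_management", "cyber_risk_governance_committee"],
    IntelLevel.OPERATIONAL: ["security_operations"],
    IntelLevel.TACTICAL: ["offensive_team", "defensive_team"],
    IntelLevel.ASYMMETRICAL: ["threat_intelligence_unit"],
}

def route(level: IntelLevel) -> list:
    """Return the audiences that should receive intelligence at this level."""
    return AUDIENCE[level]

assert "board" in route(IntelLevel.STRATEGIC)
```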
CIDS also acts as the laboratory for cybersecurity intelligence, responsible for leading the human and technology security practice through a data-dependent format to provide rapid response capabilities. Information gathering is the process of providing organizational leadership with context for improved decision-making on current and forward-looking objectives that are key to operational success or to avoiding operational failure. Converting information into intelligence requires an organization to develop formal processes, capabilities, analysis, monitoring and communication channels that enhance its ability to respond appropriately and in a timely manner. Intelligence gathering assumes that the organization has in place objectives for cybersecurity that are well defined through plans of execution and possesses capabilities to respond to countermeasures (surprise) as well as expected outcomes.
Legal “Best Efforts” Considerations in Cyberspace
To say that the legal community is struggling with how to address cyberrisks is an understatement: on the one hand, firms must address the protection of their own clients’ data; on the other, they must determine negligence in a global environment where no organization can insure against a data breach with 100 percent certainty. “The ABA Cybersecurity Legal Task Force, chaired by Judy Miller and Harvey Rishikof, is hard at work on the Cyber and Data Security Handbook. The Cyber Incident Response Handbook, which originated with the Task Force.”[i] Law firms face the same challenges as all other organizations but are also held to a higher standard by ethical rules that require confidentiality of attorney-client and work-product data. I looked to the guidance provided by the ABA to frame the fifth pillar of the CRFC.
The concept of “best efforts” is a contractual term used to obligate the parties to make their best attempt to accomplish a goal, typically used when there is uncertainty about the ability to meet a goal. “Courts have not required that a party under a duty to use best efforts to accomplish a given goal make every available effort to do so, regardless of the harm to it. Some courts have held that the appropriate standard is one of good faith. Black’s Law Dictionary 701 (7th ed. 1999) has defined good faith as ‘a state of mind consisting in (1) honesty in belief or purpose, (2) faithfulness to one’s duty or obligation, (3) observance of reasonable commercial standards of fair dealing in a given trade or business, or (4) absence of intent to defraud or to seek unconscionable advantage.’”[ii]
Boards of directors and senior executives are held to these standards by contractual agreement, whether they are aware of them or not, in the event a breach occurs. The ABA has adopted a security program guide developed by Carnegie Mellon University’s Software Engineering Institute. The Carnegie Mellon Enterprise Security Program (ESP) has been tailored for law firms as a prescriptive set of security-related activities as well as incident response and ethical considerations. The Carnegie Mellon ESP spells out that “some basic activities must be undertaken to establish a security program, no matter which best practice a firm decides to follow. (Note that they are all harmonized and can be adjusted for small firms.) Technical staff will manage most of these activities, but firm partners and staff need to provide critical input. Firm management must define security roles and responsibilities, develop top-level policies and exercise oversight. This means reviewing findings from critical activities; receiving regular reports on intrusions, system usage and compliance with policies and procedures; and reviewing the security plans and budget.”
This information is not legal guidance on complying with an organization’s best-efforts requirements. It is provided to bring awareness to the importance of the board’s and senior management’s participation in ensuring all bases are covered in cyberrisk. The CRFC’s fifth pillar completes the framework as a link to existing standards of information security with an enhanced approach that includes cognitive science.
A cognitive risk framework for cybersecurity represents an opportunity to accelerate advances in cybersecurity and enterprise risk management simultaneously. The convergence of technology, data science, behavioral research and computing power is no longer wishful thinking about the future. The future is here, but in order to fully harness the power of these technologies and the benefits they make possible, IT security professionals and risk managers in general need a guidepost for comprehensive change. The cognitive risk framework for cybersecurity is the first of many advances that will change how organizations manage risk now and in the future in fundamental and profound ways few have dared to imagine.
Simplicity may conjure up thoughts of inner peace and contemplative musings about self-actualization, but that is not the kind of simplicity I am referring to. The concept of simplicity that I think is really interesting is the challenge of making the complex simple. I am referring to the kind of simplicity that Steve Jobs imagined when he changed how we use technology.
Jobs redesigned how our brains interact with technology without our realizing we were participating in a brain hack! The ecosystem Apple created with the Mac and mobile devices via the Apple Store is a stroke of genius and an answer to an interesting problem. How to make technology so simple everyone on the planet can use it right out of the box?
“The reason that Apple is able to create products like the iPad is because we’ve always tried to be at the intersection of technology and the liberal arts. To be able to get the best of both. To make extremely advanced products from a technology point of view, but also have them be intuitive, easy to use, fun to use, so that they really fit the users. The users don’t have to come to them, they come to the user. And it’s the combination of these two things that I think has let us make the kind of creative products like the iPad,” said Steve Jobs at the 2010 introduction of the iPad.
Supposedly, the idea of a smart phone had been discussed long before Apple created the first iPhone but no one was able to put all the pieces together in the way Jobs did. The lesson from Apple’s success should not be that simplicity is too hard to conceive; instead, we must reframe simplicity as the end goal.
By reframing simplicity as the end goal, Jobs was able to see how multiple devices, such as the Walkman (remember those?), cameras and phones, could be integrated seamlessly into one device. Jobs showed how a focus on solving the problems that cause poor customer experience helps create higher profits, customer loyalty and shareholder value; not the other way around. More important than the technology, Jobs chose not to control how the devices were used, which harnessed yet another ecosystem of developers and innovators who shared in Apple’s ascent to become the most profitable company on the planet. In other words, simplification led to organic iterations of new services, spawning global demand for all things Apple.
This raises very interesting questions about how we deal with risks or solve complex problems that appear to be intractable. If complexity is a product of our own design, what can we learn from Apple’s lessons in simplicity? You might be surprised that simplicity is a topic being studied and tested in real-world scenarios.
About five years ago, Siegel+Gale, a global brand strategy, design and experience firm, created a new website called the Simplicity Index to understand the role simplicity plays in brand awareness and loyalty. The Simplicity Index measures how customers perceive the simplicity of a company’s products and services along five attributes: Easy to Understand; Transparent and Honest; Making Customers Feel Valued; Innovative and Fresh; and Useful to Customers. The simplicity attributes of a brand lead to measurable benefits in higher profitability, customer loyalty and premium pricing because of the perceived value.
Simplicity can be quantified and measured in real returns to organizations!
The power of simplicity is much bigger than a product strategy! Consider how risk management could be transformed if internal controls and compliance were redesigned to make it simple for employees to get their work done or follow the rules. Simplicity requires that we ask non-intuitive questions, such as: why must we continue to operate the way we do? What barriers to simplicity exist for customers and employees? Is it time for the attributes from the Simplicity Index to serve as the end game, not a mission statement with no real strategy of execution?
While you ponder those questions, we should also ask why complexity, not simplicity, is the norm. There are no simple answers, but there are examples from network engineering of how overly complex network security design leads to vulnerabilities in cybersecurity.
The concept of Robust Yet Fragile
Engineers of computer networks are well versed in how complexity builds as well-meaning security professionals add controls and policies in response to threats and weaknesses without considering the impact on network fragility over time.
John Doyle, the John G. Braun Professor of Control and Dynamical Systems, Electrical Engineering, and BioEngineering at the California Institute of Technology, introduced the “Robust Yet Fragile” (“RYF”) paradigm to explain the five components of network design used to build a robust system.
Each design component is built on the concept of adding robustness to networks to handle today’s evolving business needs. “Reliability is robustness to component failures. Efficiency is robustness to resource scarcity. Scalability is robustness to changes in the size and complexity of the system as a whole. Modularity is robustness to structured component rearrangements. Evolvability is robustness of lineages to changes on long timescales.”
The graph in the above exhibit describes the optimal point of robust network design. “Like all systems of equilibrium, the point at which robust network design leads to unnecessary complexity is the paradox faced by security professionals and systems architects. Systems, such as the Internet, are robust to a single point of failure yet fragile to a targeted attack. As networks bolt on more stuff to build scale, the weight of all that stuff becomes more risky,” according to Doyle.
Doyle’s warnings about internet security also apply to enterprise risk management. Does anyone really believe that every employee understands how to operationalize all of the myriad policies and procedures put into effect each year? If so, you may be operating in the Domain of the Fragile and unaware of the vulnerabilities lurking around the corner.
How does an organization reframe simplicity?
The answer to that question differs by industry and organizational culture. A better way to answer it is to pose new questions for you to consider in your organization. For example, has the cost of critical operational functions increased at a higher rate than the benefits? How difficult is it for management to get timely answers about customer profitability, enterprise risk or financial performance? Are you losing customers because you are difficult to do business with? Are your employees empowered to solve risks on their own or given the tools to improve the customer experience? As you can see, the list of possibilities is endless; however, if you do not know the answers to these questions, you are operating in the Domain of the Fragile.
Board governance is one place where the example of simplicity can be modeled from the top down. Directors have an opportunity to reframe success and reduce risk with a focus on simplicity. Simplicity is not just a focus on Less but a renewed focus on Better. Simplicity is not about doing “more” with “less”; it’s about doing less to achieve more!
As you consider new strategies for 2016 and beyond how you reframe simplicity may be the difference in success or failure for years to come.
R.I.S.K. is the next-generation chief risk/audit/compliance/IT security officer, capable of processing billions of bits of data, analyzing behavioral patterns, assessing changes in internal controls and tackling cyber risks within seconds of an attack. R.I.S.K. does not command a salary, go on vacation, require a pension or healthcare benefits or complain about not having enough budget or resources to get the job done.
What is R.I.S.K.? Risk Intelligent Systems Knowledgeware is a concept that I created to describe a collection of informatics applications that are in development today designed to tackle the challenge of tomorrow’s complex risk problems. If you think this is some far-fetched science fiction story about risk management you simply have not done your homework. Let me explain why risk management, as you know it today, will never be the same and is going through a major transformation never before seen.
Intelligence and security informatics (“ISI”) is defined as the development of advanced information technologies, systems, algorithms and databases for international, national and homeland security-related applications, through an integrated technological, organizational and policy-based approach. Academics, military researchers, systems programmers and information security engineers are exploring a range of advanced technologies to address tomorrow’s threats. Disparate teams from around the world are working, separately and in collaborative partnerships, on first-generation smart systems to redefine how risk management and cybersecurity will be prosecuted in the very near future. While it is true that much of this research is very early stage, it is also true that practical applications are in use today.
What is driving this change? Every organization is impacted by the speed of change and the volumes of data generated by regulation and our 24/7, online, on-all-the-time, networked environment. Whether you work in a government agency, a small business or a global corporate enterprise, humans candidly cannot keep up without the assistance of technology. It would be naïve to assume that risk, audit, IT security and compliance professionals have the ability to assess the health of an entire organization by reviewing a fraction of the internal controls and enterprise threats that endlessly flow through every firm.
Risk professionals spend 80% or more of their time focused on high frequency, low impact risks because it is easy to capture yet only creates a false sense of security. The phenomenon is called cognitive overload and creates a distraction from the true risks that threaten organizations. This is the primary reason organizations are “surprised” when a major control failure disrupts business or security professionals fail to keep up with cyber threats. Conventional risk practice is not enough! Unfortunately, risk professionals cling to ineffective risk practice without questioning outcomes or seeking alternatives.
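The distortion described above can be made concrete with a toy risk register. This is a hypothetical sketch: the risk names, frequencies and impact figures are invented for illustration. It shows how ranking risks by raw frequency surfaces the noisy, low-impact items, while ranking by expected annual loss (frequency times impact) surfaces the risks that actually threaten the organization.

```python
# Toy risk register with hypothetical frequency (events/year) and impact ($/event).
risks = [
    {"name": "expense-policy breach", "freq": 500, "impact": 200},
    {"name": "phishing credential theft", "freq": 12, "impact": 150_000},
    {"name": "core-system outage", "freq": 0.5, "impact": 4_000_000},
]

# Ranking by raw frequency pushes the high-frequency, low-impact item to the top...
by_frequency = sorted(risks, key=lambda r: r["freq"], reverse=True)

# ...while ranking by expected annual loss (freq x impact) reorders attention
# toward the low-frequency, high-impact risk.
by_expected_loss = sorted(risks, key=lambda r: r["freq"] * r["impact"], reverse=True)

assert by_frequency[0]["name"] == "expense-policy breach"
assert by_expected_loss[0]["name"] == "core-system outage"
```

Under these made-up numbers, the rare outage carries an expected loss of $2 million a year, ten times the expected loss of the item a frequency-driven review would spend its time on.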
So what are the implications of this transformation in risk management? First of all, it is important to understand that this change has already begun and will speed up rapidly as new technology is brought to bear to address risks. Open source intelligence is increasingly being used in security related applications. Hundreds of cyber security vendor applications have been launched in the last 3-5 years and behavioral defense systems have been deployed to identify patterns of insider threats to proprietary corporate data.
As these systems and their developers learn from their early stage experience more advanced applications will be deployed very rapidly. Artificial intelligence and machine learning are playing a larger role in cybersecurity, which can in theory help companies identify risks and anticipate problems before they occur. The idea is to create software that can adapt and evolve to combat ever-changing attack strategies, or identify patterns of suspicious behavior.
Traditional security mechanisms have leveraged rule-, pattern-, signature- and algorithm-based approaches to detect threats, and that’s a problem, according to Paul Stokes, CIO of the University of Victoria in British Columbia. “These approaches require constant care and feeding to identify and mitigate security threats,” he said. “I think machine learning changes the game.”
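The shift Stokes describes, from hand-maintained rules to systems that learn a behavioral baseline from data, can be illustrated with a deliberately simple statistical sketch. This is not any vendor’s method: it is a toy z-score detector over hypothetical daily login counts, learning its “normal” from the data itself rather than from a signature written in advance.

```python
import statistics

def flag_anomalies(daily_logins, threshold=3.0):
    """Flag days whose login count deviates more than `threshold`
    standard deviations from the historical mean (a z-score test).
    The baseline is learned from the data, not hand-coded as a rule."""
    mean = statistics.mean(daily_logins)
    stdev = statistics.stdev(daily_logins)
    return [i for i, n in enumerate(daily_logins)
            if stdev > 0 and abs(n - mean) / stdev > threshold]

# Thirty ordinary days hovering around 100 logins, then one burst of 900,
# e.g. a credential-stuffing attack (all numbers hypothetical).
history = [100, 98, 103, 97, 101, 99, 102, 100, 96, 104] * 3 + [900]
assert flag_anomalies(history) == [30]  # only the burst day is flagged
```

Production behavioral-defense systems use far richer models, but the design choice is the same: the detector adapts as the baseline drifts, where a static signature would need constant care and feeding.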
The risk professional of the future will be more defined in skill set and come from a diverse set of deep domain expertise beyond audit, legal, operations or generalist-oriented backgrounds. “Risk engineer” will increasingly become a new title bestowed on security professionals able to design or deploy systems with intelligence custom-fit to the organization’s risk. The cost of risk, compliance and audit will be streamlined and spread across resources more effectively, targeting real threats to the enterprise. These changes were unimaginable a mere five years ago but are becoming a reality today.
The question is: are you prepared, or will you ignore the change until you are replaced by R.I.S.K.?