Tag Archives: cognitive risk management

2019-11-22 by: James Bone | Categories: Risk Management
The 3 Final Pillars of the Cognitive Risk Framework
Understanding the Elements of the CRF for Cybersecurity and ERM

The five pillars of the cognitive risk framework (CRF) are designed to provide a 3D view of enterprise risks. James Bone details here additional levers of risk governance in the final three pillars of the CRF.

In earlier installments, James discussed the first pillar of the Cognitive Risk Framework (CRF), cognitive governance; the five principles undergirding cognitive governance; and the second pillar, intentional control design.

Intentional design, the second pillar, represents a range of solutions designed to manage risks both large and small. It begins with a clear set of strategic objectives, leverages empirical, risk-based data and then clarifies optimal outcomes. Simplicity is the guiding principle of intentional design. Intentional design applies cognitive governance’s five principles, using risk-based data to understand how poor workplace design contributes to inefficiencies and hinders employee performance. It makes the case for design as a risk lever that creates situational awareness and incorporates resilient operational excellence into risk management practice. It is an outcome of the analysis performed in cognitive governance.

Building on the first two pillars, the third pillar, intelligence and active defense, focuses on proactive risk management. The first pillar, cognitive governance, drives solution templates for sustainable outcomes using a multidisciplinary approach. The final three pillars are additional levers of risk governance.

Introduction

Even in the early stages of adopting enterprise risk management approaches to information security, Chief Information Security Officers (CISOs) recognized the importance of data. However, data alone is not intelligence against an adversary who understands that user behavior is the critical path to achieving its objectives.

CISOs wear multiple hats, even when multitasking may not be the right approach. One of the many challenges in cyber risk is understanding how to wade through voluminous data to defend against adversaries who operate in stealth mode with weapons that evolve at digital speed. CISOs need help, not a new job title.

Advanced technology gives security professionals the analytical firepower to analyze data across a variety of threat vectors. CISOs understand that technology alone only creates scale; it does not by itself provide a robust toolset to address the full spectrum of evolving threats. Unfortunately, despite billions spent on cybersecurity, cyber theft continues to grow and outpace security defenses – the cyber paradox continues!

Intentional design, the second pillar, is a lever that facilitates the third pillar, intelligence and active defense strategies. Designing the right solution set for cybersecurity must also address impacts on IT, organizational and leadership resources. The prime target of attack is the human asset, the weakest link in security. CISOs need help separating the myth of assurance from the reality that exists in their networks. Internal and external intelligence about the true nature and changing behavior of threat actors is critical for gaining real insights.

A core use case in cybersecurity targets the insider threat, yet the biggest cause of data breaches is human error. IT security needs empirical data it can rely on instead of conventional wisdom.

The elephant in the room continues to be a lack of credible data.

IT professionals need a variety of analytical methods to better understand what is actually possible with technology and how to support and enable human assets to recognize and address threats more efficiently.

Threat actors understand the dilemma CISOs face, and they have learned to exploit it to their advantage. This is why CISOs and Chief Risk Officers must collaborate to design not only cyber solutions that deal with poor work processes, but also continuous intelligence that monitors evolving threats without piling on inefficient security processes. Active defense has been developed as a proactive security response to threat actors. More advanced approaches will emerge from what is learned by using similar and even more effective tactics.

CISOs have been slow to adopt active defense, yet these proactive approaches are becoming more common. Active defense is not hacking the hackers; it is the process of designing traps, like honeypots, that allow IT professionals to gain intelligence on threat actors and mitigate damage in the event of a breach.
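The honeypot idea can be made concrete with a small sketch. The following is a minimal, illustrative decoy listener, not a production honeypot or any vendor's product; the port, filename and logic are assumptions chosen purely for the example. Because no legitimate service runs on the decoy port, every connection it records is intelligence about who is probing the network and what they are sending.

```python
# Minimal, illustrative decoy listener (assumed port and log file, not a
# recommended configuration). It accepts connections, records who touched it
# and what they sent, and exposes no real data.
import socket
import datetime

DECOY_PORT = 2222          # looks like an alternate SSH port to a scanner
LOG_FILE = "honeypot_touches.log"

def run_decoy_listener() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", DECOY_PORT))
        server.listen()
        while True:  # runs until interrupted; sketch only
            conn, (src_ip, src_port) = server.accept()
            with conn:
                conn.settimeout(5)
                try:
                    first_bytes = conn.recv(1024)
                except socket.timeout:
                    first_bytes = b""
                # Any traffic here is suspicious by definition, because no
                # legitimate service lives on this port.
                with open(LOG_FILE, "a") as log:
                    log.write(
                        f"{datetime.datetime.utcnow().isoformat()} "
                        f"{src_ip}:{src_port} sent {first_bytes!r}\n"
                    )

if __name__ == "__main__":
    run_decoy_listener()
```

The value lies in the intelligence gathered (source addresses, timing, tooling), not in retaliation.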

Cyber risk is one of the most complicated risks our nation faces due to the asymmetric nature of the actors involved. Cyber risk is a profitable enterprise on the Dark Web, with everyone from contractors-for-hire to the designers of advanced threats at the top of the food chain. The market for the most effective tools to achieve criminal objectives ensures that each version is enhanced to gain market share. One-dimensional approaches will continue to leave organizations vulnerable to these threats. An open internet and reverse engineering of customer-facing tech ensure the cyber arms race on the Dark Web will only accelerate. Forward-looking security officers seek solutions that address the complexity of the problem with more comprehensive approaches.

Extensive work done by security researchers has demonstrated that many of the attacks are the result of simple vulnerabilities that have been exploited by attacking human behavior. Simple attacks do not imply a lack of sophistication; hackers use deception in an attempt to cover their tracks and obscure attribution. As organizations push forward with new digital business models, a more thoughtful approach is needed to understand security at the intersection of technology and humans in a networked environment with no boundaries.

Pillar 3: Intelligence and Active Defense

This summer I attended a cybersecurity conference and sat in on a demonstration of social engineering. The demonstration included a soundproof booth with a hacker calling a variety of organizations using different personas. The “targets” included every level of the organization; the task was completing a checklist of items that would be used to initiate an attack.

The audience was enthralled at the ease with which many of the “targets” unwittingly agreed to assist contestants in the demonstration. Collectively, these approaches are called cognitive hacks: attacks that rely on changing the target’s perceptions and behaviors to achieve their objectives.

The demonstration showed how hackers conduct human reconnaissance before an attack is launched. In some of the demonstrations, the “target” was asked to click on links; many complied, enabling the hacker to gain valuable information about training, defensive strategies and other details that could be used to tailor an attack on the firm. Prior to the event, contestants developed a dossier on the firms through publicly available information. A few of the “hackers” were not completely successful, but in less than one hour, several contestants showed the audience the simplicity of the approach and the inherent vulnerability of defenses in real time. This is one of many techniques used to get around the myriad cyber defenses at sophisticated organizations.

Deception on the internet is the most effective attack vector, and the target is primarily the human actor.

Pillar three, intelligence and active defense, focuses on the “soft periphery” of human factors. These factors center on the human interaction with technology. As demonstrated above, technology is not needed beyond a phone call. No organization is immune to a data breach when cybersecurity is focused on either detection or prevention using technology.

Defensive and detection strategies, or a combination of approaches, must include the human element. A fourth approach is available that includes a focus on hardening human assets across the firm. Attackers need only to be successful once, while defenders must be successful 100 percent of the time; thus, the asymmetry of the risk.
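That asymmetry can be made concrete with back-of-the-envelope arithmetic. In the sketch below, the 3 percent per-email click rate and the 200 recipients are assumed figures used only for illustration; the point is that even a small per-attempt success rate compounds quickly in the attacker's favor.

```python
# Back-of-the-envelope illustration of the attacker/defender asymmetry.
# Assumed inputs (illustrative only): a 3% chance any single phishing email
# is clicked, sent to 200 employees.
p_single_success = 0.03
attempts = 200

# Probability the defender blocks every attempt vs. the attacker
# succeeding at least once.
p_all_blocked = (1 - p_single_success) ** attempts
p_at_least_one_breach = 1 - p_all_blocked

print(f"Chance every attempt is blocked: {p_all_blocked:.4f}")
print(f"Chance of at least one success:  {p_at_least_one_breach:.4f}")
# With these assumptions the attacker succeeds at least once about 99.8%
# of the time.
```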

How does an organization harden the soft periphery of human factors beyond training and awareness? The first step is to recognize there is more to learn about how to address human factors. Technology firms have only begun to explore behavioral analytics solutions using narrow models of behavior. Second, the soft periphery of human factors is a risk-based analysis of gaps in security created by human behavior inside and outside the organization.

Business is conducted in a boundaryless environment that leaves a digital trail of forensic behavior that can be weaponized by adroit criminal actors. Defining critical behavioral threats requires early-stage intelligence; therefore, CISOs must consider how behavior creates fragility in security, then use risk-based approaches to mitigate it. Each organization will exhibit different behavioral traits that may lead to vulnerabilities and must be better understood.

As organizations rush to adopt digital strategies, the links between customer and business partner data may unwittingly create fragility in security at the enterprise. Organizations with robust internal security may be surprised to learn how fragile their security profile is when viewed across relationships.

IT professionals need intelligence to assess their robust, yet fragile security posture. Internet of things (IoT), cloud platforms and third-party providers create fragility in security defenses that leave organizations exposed. Organizational culture is also a driver of these behavioral threats, including decision-making under uncertainty.

“A great civilization is not conquered from without until it has destroyed itself from within.”
— Ariel Durant

The third pillar proposes the following proactive approaches as additional levers:

  • Active defense
  • Cyber and human intelligence analysis
  • IT security/compliance automation
  • Enhanced cyber hygiene – internal and external human actors
  • Cultural behavioral assessment and decision analysis

To leave room to cover the remaining pillars, I will not explore these five levers at length; however, I have provided examples in the supporting references for readers to explore on their own. My goal here is to suggest an approach that takes the human element into account in a more comprehensive way. As each pillar is implemented, a three-dimensional picture of risk becomes clear. Intelligence is a design element that adds clarity to the picture.
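As one illustration of the levers above, IT security/compliance automation can start as nothing more than a scheduled job that runs control checks and surfaces exceptions for human review. The sketch below is a minimal, hypothetical example; the control IDs, descriptions and placeholder checks are invented for illustration and would be replaced by queries against the organization's actual directory, IAM or configuration systems.

```python
# Minimal sketch of the "IT security/compliance automation" lever.
# Controls and their checks are illustrative placeholders, not a standard.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Control:
    control_id: str
    description: str
    check: Callable[[], bool]   # returns True when the control passes

def password_policy_enforced() -> bool:
    # Placeholder: in practice, query a directory service or IAM API.
    return True

def mfa_enabled_for_admins() -> bool:
    # Placeholder: in practice, query the identity provider.
    return False

CONTROLS: List[Control] = [
    Control("AC-01", "Password policy enforced", password_policy_enforced),
    Control("AC-02", "MFA required for administrative accounts", mfa_enabled_for_admins),
]

def run_compliance_checks(controls: List[Control]) -> List[str]:
    """Run every control check and return the IDs of failed controls."""
    return [c.control_id for c in controls if not c.check()]

if __name__ == "__main__":
    failures = run_compliance_checks(CONTROLS)
    print("Exceptions requiring review:", failures or "none")
```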

By way of example, the organizations that fared better than others during the live hack-athon were the ones whose employees practiced strong skepticism and insisted that the caller validate their information with emails, a callback number and a name of a supervisor in the firm. These requests stopped the contestants during the hacker demonstration when the “targets” were persistent in their requests for validating information.

Pillar four is the next lever that deepens insight into enhanced risk governance.

Pillar 4: Cognitive Security and the Human Element

The future of risk governance and decision support will increasingly include the implementation of intelligent informatics. Intelligent informatics is an emerging multidisciplinary approach with real-world solutions in medicine, science, government agencies, technology and industry. These smart systems are being designed to combine computation, cognition and communications to derive new insights to solve complex problems.

We are in the early stage of development of these systems, which include the following burgeoning fields of research:

  • machine learning and soft computing;
  • data mining and big-data analytics;
  • computer vision and pattern recognition; and
  • automated reasoning.

The truth is, many of the functions in risk management, compliance, audit and IT security can and will be automated, providing organizations real-time monitoring and analysis 24/7/365, but humans will be needed to decide how to respond.
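A minimal sketch of that division of labor, automated monitoring with a human deciding the response, might look like the following; the failed-login counts and the three-standard-deviation threshold are illustrative assumptions, not a recommended tuning.

```python
# Sketch of automated, continuous monitoring with a human left in the loop.
# The data and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(daily_counts: list, sigma: float = 3.0) -> list:
    """Return indexes of days whose activity deviates more than `sigma`
    standard deviations from the mean; an analyst decides what to do next."""
    mu, sd = mean(daily_counts), stdev(daily_counts)
    return [i for i, x in enumerate(daily_counts) if abs(x - mu) > sigma * sd]

# Example: failed-login counts per day over two weeks (made-up numbers).
failed_logins = [12, 9, 11, 10, 13, 8, 12, 11, 9, 10, 95, 12, 10, 11]
suspicious_days = flag_anomalies(failed_logins)
print("Days escalated for analyst review:", suspicious_days)
```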

Advances in automation will provide new strategic insights not possible with manual processes and free risk professionals to explore areas of the business that were previously inaccessible. This is not science fiction! Real-life examples exist today, including nurses using clinical decision support systems (CDSS) to improve patient outcomes. An innovation in risk management will be the advent of decision support systems across a range of solutions. These new technologies will allow risk professionals to design solutions that drive decision-making deep into the organization and provide senior management with actionable information about the health of the organization in near real-time.

Risk management technology has rapidly evolved over the last 20 years, from compliance-based applications to integrated modules that automate oversight. GRC applications today will pale in comparison to intelligent informatics that will apply internal and external data pushed to decision-makers at every level of the organization. We are at a crossroads: Organizations are operating with one foot in the 19th century and the other foot racing toward new technology without a roadmap.

Technology solutions that do not improve decision-making at each level of the organization may hinder future growth by adding complexity. A strategic imperative for decision support will be driven by factors associated with cost, competition, product and increased regulatory mandates that challenge organizational resilience. Intelligent informatics will be one of many solutions enabling the levers of the human element.

The human element is the empowerment of every level of the organization by imparting situational awareness into performance, risks, efficiency and decision-making, combined with the ability to adjust and respond in a timely manner.  Risk systems are getting smarter and faster, but the real power will only be realized by how well risk professionals learn to leverage these tools to design the solutions needed to help organizations achieve their strategic objectives.

Decision support and situational awareness is the final pillar of a cognitive risk framework. The end product of a cognitive risk framework is the creation of a robust decision support infrastructure that enables situational awareness. A true ERM framework is dynamic, continuously improving and strategic, adding value through actionable intelligence and the capability to respond to a host of threats.

Pillar 5: Decision Support and Situational Awareness

Organizations too often say “everyone owns risk,” but then fail to provide employees with the right tools to manage risks. Risk professionals will continue to be behind the curve without a comprehensive approach for thinking about how to create an infrastructure around decision support and situational awareness.

The five pillars of a cognitive risk framework are designed to provide a roadmap for becoming a resilient organization. Resiliency will be defined differently by each organization, but the goals inherent in a cognitive risk framework lead to enhanced risk awareness and performance and provide every level of the organization with the right tools to manage its risks within the parameters defined by cognitive governance.

Nimbleness is often cited as an aspirational attribute of a resilient organization; a nimble organization increasingly resembles a technology platform with operational modules designed to scale as needs change. Nineteenth-century organizations are more rigid by design, which reduces their responsiveness to change in comparison to a virtual economy, in which change only requires a few keystrokes. The retail apocalypse is just one of many examples of the transformations to come. A smooth transition to the fourth industrial revolution may depend largely on a digital transformation of the back office and operations.

This dilemma reminds me of a Buddhist saying: “If you meet the Buddha on the road, kill him,” a saying that suggests we need to be able to destroy our most cherished beliefs. We can grow only if we are able to reassess our belief system. To do this, we need to detach ourselves from our beliefs and examine them; if we are wrong, then we must have the mental strength to admit we are wrong, learn and move on.

Enterprise risk management has become the Buddha, and it is elusive even for the most sophisticated organizations.

A cognitive risk framework builds on traditional ERM approaches to put the human at the center of ERM with the tools to manage complex risks. Isn’t it time to infuse the human element in risk management?

Each of the five pillars has been presented at a minimal level of detail, but there is a tremendous amount of scientific research supporting each of the approaches to achieve a heightened level of maturity for risk governance. A cognitive risk framework for cybersecurity and ERM is the only risk framework based on Nobel-Prize-winning research from Herbert Simon to Daniel Kahneman, as well as Paul Slovic and other contemporaries of modern risk thinking.

I have referred to cognitive hacks throughout the installments of a cognitive risk framework. Cognitive hacks – which do not require a computer and are attacks whose chief objective is changing the behavior of the target to achieve the attacker’s goals – were first recognized by researchers at the Center for Cognitive Neuroscience at Dartmouth. Hackers have deployed variations of cognitive hacks, such as phishing, social engineering, deception, deepfakes and other methods, since the beginning of the internet. These attacks are growing in sophistication and deceptiveness, as demonstrated in the 2016 election and continuing into 2020.

Cognitive hacks are a global threat to growth in the fourth industrial revolution. The wholesale destruction of institutional norms of behavior and discourse on the internet is a symptom of the pervasive nature of these attacks. Criminals have weaponized stealth and trust on the internet through intelligence gained from our online behavior. A hidden market in our digital footprint is traded by legitimate and illegitimate players with little to no oversight. Government and business leaders have been caught flat-footed in this cyberwar; IT and risk professionals on the front line of defense lack the resources and tools to effectively prevent the spread of the threat.

Cognitive hacks prey on our subconscious, our biases and heuristics of decision-making. A cognitive risk framework was designed to bring awareness to this growing threat and create informed decision support and situational awareness to counter these threats.

A more complete and detailed version of the cognitive risk framework will be developed in 2020, and more advanced versions will be developed by others, including myself. I want to thank Corporate Compliance Insights for allowing me to introduce this executive summary.


CSO: Cyber security culture is a collective effort.

Deloitte: Cultivating a cyber risk-aware culture

Forbes: 5 Ways to Fight Back Against Cybersecurity Attacks: The Power of Active Defense

Tripwire: Active Defense: Proactive Threat Intelligence with Honeypots

Forbes: Russian Hackers Disguised as Iranian Spies Attacked 35 Countries

DARPA: Active Cyber Defense (ACD)

DARPA: Cyber-Hunting at Scale

DARPA: Cyber Assured Systems Engineering (CASE)

Norton: Good cyber hygiene habits to help stay safe online

Digital Guardian: Enterprise Cyber Hygiene Best Practices: Tips & Strategies for Your Business

NCBI: Improving Cultural Competence

The University of Kansas Department of Electrical Engineering and Computer Science: Intelligent Informatics

Canadian Journal of Nursing Informatics: Nursing and Artificial Intelligence

Dartmouth: Center for Cognitive Neuroscience

World Economic Forum: The Fourth Industrial Revolution: what it means, how to respond

2019-08-28 by: James Bone | Categories: Risk Management
Cognitive Governance: 5 Principles

https://www.linkedin.com/posts/chiefriskofficer_cognitive-governance-5-principles-corporate-activity-6572434287906308097-sl8Y

James Bone explores cognitive governance, the first pillar of the cognitive risk framework, and the five principles that drive the framework to simplify risk governance, add new rigor to risk assessment and empower every level of the organization with situational awareness to manage risk with the right tools.

The three lines of defense (3LoD), or more specifically risk governance, is being rethought on both sides of the Atlantic.[1],[2],[3] A 3LoD model assigns three or more defensive lines of accountability to protect an organization, in the same vein as the Maginot Line’s fixed defenses.[4] IT security also adopted layered security and controls, but is now evolving to incorporate risk governance approaches. The Maginot Line was considered state of the art for defensive wars fought in trenches, yet it was vulnerable to an offensive change in enemy strategy. Inflexibility in the design and execution of risk practice is the Achilles’ heel of good risk governance. In order to build risk programs that are responsive to change, we must redesign the solutions we are seeking in risk governance.

A cognitive risk framework clarifies risk governance and provides a pathway for organizations to understand and address risks that matter. There are many reasons 3LoD is perceived to not meet expectations, but a prominent one is unresolved conflicts in perceptions of risk … the human element.[5] Unresolved conflicts about risk undermine good risk governance, trust and communication.

In Risk Perceptions, Paul Slovic reflected on interpersonal conflicts: “Can an atmosphere of trust and mutual respect be created among opposing parties? How can we design an environment in which effective, multiway communication, constructive debate and compromise take place?”[6]

A cognitive risk framework is designed to find simple solutions to risk management through a focus on empowering the human element. Please keep this perspective in mind as you digest the five principles of cognitive governance.

Building blocks for a cognitive risk framework

Principle #1: Risk Governance

Risk governance continues to be a concept that is hard to grasp and elusive to define in concrete terms. Attributes of risk governance such as corporate culture, risk appetite and strategy are assumed outcomes, but what are the right inputs to facilitate these behaviors? Good risk governance is sustainable through simplicity and design. In an attempt to simplify risk governance, two inputs are offered: discovery and mitigation.

Risk governance is presented here as two separate and distinct processes:

Risk Assessment (Discovery) and Risk Management (Mitigation)

Risk management is often conflated to include risk assessment, but the skills, tools and responsibilities needed to adequately address these two processes require that they be treated as separate and distinct functions within risk governance. This may appear counterintuitive at first glance, but too narrow a focus on either the mitigation of risk (management) or the discovery of risk (assessment) limits the full spectrum of opportunities to enhance risk governance.

Why change?

Risk analysis is a continuous process of learning and discovery inclusive of quantitative and qualitative methods that reflect the complexity of risks facing all organizations. Risk analysis should be multidisciplinary in practice, borrowing from a variety of analytical methodologies. For this reason, a specialized team of diverse risk analysts might include data scientists, mathematicians, computer scientists (hackers), network engineers and architects, forensic accountants and other nontraditional disciplines alongside traditional risk professionals. The skill set mix is illustrative, but the design of the team should be driven by senior management to create situational awareness and the tools needed to analyze complex risks. More on this point in future installments.

This approach is not unique or radical. NASA routinely leverages different risk disciplines in preparation for space travel. Wall Street has assimilated physicists from the natural sciences with finance professionals, mathematicians and computer programmers to build risk solutions for their clients and to manage their own risk capital. Examples are plentiful in automotive design, aerospace and other high-risk industries. Success can be designed, but solving complex issues requires human input.

“Risk analysis is a political enterprise as well as a scientific one, and public perceptions of risk play an important role in risk analysis, adding issues of values, process, power and trust to the quantification issues typically considered by risk assessment professionals (Slovic, 1999)”.[7]

Separately, risk management is the responsibility of the board, senior management, audit and compliance. Risk management is equivalent to risk appetite, which is the purview of management to accept or reject. Senior executives are empowered by stakeholders inside and outside the firm to selectively choose among the risks that optimize performance and avoid the risks that hinder. Traditional risk managers are seldom empowered with these dual mandates, and I don’t suggest they should be.

In other words, risk management is the process of selecting among issues of value, power, process and trust in the validation of issues related to risk assessment. To actualize the benefits of sustainable risk governance, advanced risk practice must include expertise in discovery and mitigation. Organizations that develop deep knowledge in both disciplines and master conflicts in perceptions of risk will be better positioned for long-term success.

Experienced risk professionals understand that without the proper tone at the top, even the best risk management programs will fail.[8] Tone at the top implies full engagement by senior executives in the risk management process as laid out in cognitive governance.[9] Developing enhanced risk assessment processes builds confidence in risk-management decisions through greater rigor in risk analysis and recommendations to improve operational efficiency.[10] Risk governance (Principle #1) transforms assurance through perpetual risk-learning.

Principle #2, perceptions of risk, provides an understanding of how to mitigate the conflicts that hurt cognitive governance.

Principle #2: Perceptions of Risk

Risk should be a topic upon which we all agree, but it has become a four-letter word with such divergent meanings that a Google search results in 232 million derivations! The mere mention of climate change, gun control or any number of social or political issues instantly creates a dividing line that is hard, if not impossible, to penetrate. Many of these conflicts are based on deeply held personal and political beliefs that are intractable even in the face of science, data or facts, so how does an organization find common ground?

In discussing this issue with a chief operations officer at a major international bank, I was told, “we thought we understood risk management until the bank almost failed in the 2008 Great Recession.” The truth is, most organizations are reluctant to speak honestly about risks until it is too late or only after a “near miss.” In other words, risk is an abstract concept until we experience it firsthand.[11] As a result, each of us brings our own unique experience of risk into any discussion that involves the possibility of failure. These unresolved conflicts in perceptions of risk create friction in organizations, causing blind spots that expose firms to potential failure, large and small.

But why is perception of risk important?

Each of us brings a different set of personal values and perspectives to the topic of risk. This partly explains why salespeople view risks differently than, say, accountants; risk is personal and situational to the people and circumstances involved. The vast majority of these conflicting perceptions of risk are well-managed, but many are seldom fully resolved, leading to conflicts that impede performance.

Risk professionals must become attuned to and listen for these conflicts, because they represent signals about risk. Perceptions of risk represent how most people feel about a risk, inclusive of positive or negative outcomes from their own experience. Researchers view risk as probability analysis. Understanding and reconciling these conflicts in perceptions of “risk as feelings” and “risk as analysis” is a low-cost solution that releases the potential for greater performance. Yet the devil in the details can only be fully uncovered through a process of discovery.

Principle #1 (risk governance) acts as a vehicle for learning about risks that enlightens principle #2 (perceptions of risk). Even the most seasoned executive is prone to errors in judgment as complexity grows. However, communications about risk are challenging when we lack agreed-upon procedures to reconcile these conflicts.

Albert Einstein provided a simple explanation:

“Not everything that counts can be counted, and not everything that can be counted counts.”

He knew the difference requires a process that creates an openness to learning.

Principle #1 (risk governance) formalizes continuous learning about risks in order to avoid analysis paralysis in decision-making. Risk governance focuses on building risk intelligence. Principle #2 (perceptions of risk) leverages risk intelligence to fill in the gaps data alone cannot.

Perceptions of risk are complex, because they are seldom expressed through verbal behavior. In other words, how we act under pressure is more powerful than mission statements or even codes of ethics![12] We say we are safe drivers, but we still text and drive. People take shortcuts when their jobs become too complex, leading to risky behavior.[13] Unknowingly, organizations are incentivizing the wrong behaviors by not fully considering the impacts on human factors.

Surprisingly, cognitive governance means fewer, simpler rules instead of more policies and procedures. Risk intelligence narrows the “boil the ocean” approach to risk governance. The vast majority of risk programs spend 85 to 95 percent of 3LoD resources on known risks, leaving the biggest potential exposure, uncertainty, unaddressed.

Again, risk governance is about learning what the organization really values and why.

Organizations must begin to re-design the inputs to risk governance. The common denominator in all organizations is the human element, yet its impact is discounted in risk governance.

Principle #3: Human Element Design

A Ph.D. computer scientist friend from Norway once told me that organizations have a natural rhythm, like a heartbeat, and that cyber criminals understand and leverage this to plan their attacks.[14] Busy, distracted and stressed-out workers are generally more vulnerable to cyberattack. No amount of controls, training, punishment or incentives to prevent phishing attacks or other social engineering schemes is effective in poorly designed work environments, including the C-suite and rank-and-file security professionals.[15]

Cyber criminals understand the human element better than all risk professionals!

Human element design is an innovation in risk governance. Regulators have also begun to include behavioral factors, such as conduct risk, ethics and enhanced governance, in regulation, but thus far, the focus is primarily on ensuring good customer outcomes. Sustainable risk governance must consider human factors as a tool to increase productivity and reduce risk.[16],[17]

Human element design is evolving to address correlations and corrective actions in human factors and workplace errors, information security and operational risk.[18],[19],[20],[21],[22] Principles #1 (risk governance) and #2 (perceptions of risk) assist principle #3 (human element design) in defining areas of opportunity to increase efficient operations and reduce risk in human factors.

Decades of research on human factors in the workplace have led to productivity gains and reductions in operational risk across many industries. We take for granted the declining injury rates in the auto and airline industries attributed to human factors design. Simple changes, such as seatbelts and navigation systems in cars and pilot-to-co-pilot communications during take-offs and landings, are just as important — if not more so — as automation and big data projects.

So, why is it important to focus on the human element more broadly now?

The primary reason to focus on the human element now is that technology has become pervasive in everything we do today. Legacy systems, outsourcing, connected devices and networked applications increase complexity and potential risks in the workplace. The internet is built on an engineering concept that is both robust and fragile, meaning users have access to websites around the world, but that access is subject to failure at any connection. Digital transformation extends and expands these new points of fragility, obscuring risk in a cyber void. In the physical world, humans are more aware of risk exposures. In a digital environment, risks are hidden beneath complexity.

Technology has driven productivity gains and prosperity in emerging and developed economies, adding convenience to many parts of our lives; however, cyber risks expose inherent vulnerabilities in cobbled-together systems. Email, social media, third-party partners, mobile devices and now even money move at speeds that increase the possibility for error and reduce our ability to “see” risk exposures that manifest within and beyond our perceptions of risk.

Developers and users of technology must begin to understand how the design and implementation of digital transformation create risk exposures. A “rush to market” mindset has put security on the back burner, leaving users on their own to figure it out instead of making security a market differentiator. Technology developers must begin to collaborate on how security can be made more intuitive for users and tech support. Tech SROs (self-regulatory organizations) are needed to stay ahead of bad actors and government regulation. Users must also understand the limits of technology to solve challenges by building in accommodations for how people work together, share and complete specific tasks.

By fixating on narrow issues like the insider threat, which pale in comparison to the larger issue of the human element, we miss the forest for the blades of grass. The first two principles are designed to support improvements in the human element, but a new risk practice must be developed with the end goals of simplicity, security and efficient operations as products of risk governance.

I will address cognitive hacks separately;[23] these are some of the most sophisticated threats in risk governance and require special treatment.

The human element principle is a focus on designing solutions that address cognitive load, build situational awareness and manage risks at the intersection of the human-to-human and human-to-machine interaction.[24],[25],[26] Apple, Amazon, Twitter and others have learned that simplicity works to promote human creativity for growth. Information security and risk governance must become intuitive and seamless to empower the human element.

This topic will be revisited in intentional design, the second pillar, but for now, suffice it to say that a focus on the human element will create a multiplier effect in terms of productivity, growth and new products and services that do not exist today. Each of the five principles is a call to action to think more broadly about risks today and in the future.

For now, let’s move on to principle #4, intelligence and modeling.

Principle #4: Intelligence & Modeling

“All models are wrong, but some are useful”
– George Box, Statistician

Box’s warning referred to the inclination to present excessively elaborate models as more correct than simple models. In fact, the opposite is true: Simple approximations of reality may be more useful (e.g., E = mc²). More importantly, Box further warned modelers to understand what is wrong in the model: “It is inappropriate to be concerned about mice when there are tigers abroad” (Box, 1978). Expanding on Box’s sentiment, I would add that useful models are not static and may become less useful when circumstances change or as new information is presented.

For example, risk matrices have become widely adopted in risk practice and, more recently, in cybersecurity. A risk matrix is a simple tool to rank risks when users do not have the skill or time to perform more in-depth statistical analysis.[27] Unfortunately, risk matrices have been misused by GRC consultants and risk practitioners, creating a false sense of assurance among senior executives. Good risk governance demands more rigor than simple risk matrices.
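For readers unfamiliar with the tool, the sketch below shows roughly what a risk matrix reduces to: a coarse likelihood-times-impact ranking. The scales, band boundaries and sample risks are illustrative assumptions; the simplicity of the calculation is precisely why it should be treated as a triage aid rather than a substitute for deeper analysis.

```python
# Illustration of a risk matrix: a coarse ranking of risks by
# likelihood x impact on a 1-5 scale. Band boundaries are assumptions.
def risk_rating(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood and 1-5 impact score to a rating band."""
    score = likelihood * impact              # 1..25
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"

# Made-up example risks as (likelihood, impact) pairs.
risks = {
    "Phishing campaign against finance staff": (4, 4),
    "Data-center power failure": (2, 5),
    "Laptop left on a train": (3, 2),
}
for name, (likelihood, impact) in sorted(
        risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{risk_rating(likelihood, impact):6s} {name}")
```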

First, I want to be clear that the business intelligence and data modeling principle is not proposed as a big data project. Big data projects have gotten a bad rap, with conflicting examples of hype about the benefits, as well as humbling outcomes as measured in project success.[28] Principle #4 is about developing structured data governance in order to improve business intelligence for better performance.

Let me give you a simple example: In 2007, prior to the start of the Great Recession, mutual funds had used limited amounts of derivatives to manage risk and boost returns. Wall Street began to increase leverage using derivatives to gain advantage; however, firms relied on manual processes and were unable to easily quantify increased exposure to counterparty risk.[29] A simple question like “what is my total exposure?” took weeks — if not months — to answer and did not include comprehensive answers about impacts to fund performance if specific risk scenarios occurred. We know what happened in 2008, and many of those risks materialized without the risk mitigation needed to offset downside exposure.[30]

Without getting too wonky, manual operational processes for managing collateral and heavy use of spreadsheets and paper contracts slowed the response rate to answer these questions and minimize risk in a more timely manner. Organizations need to understand the strategic questions that matter and create the ability to answer them in minutes, not months. Good risk governance proactively defines strategic questions and refines them as information changes the firm’s risk profile.
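When positions are captured as structured data rather than spreadsheets and paper contracts, “what is my total exposure?” becomes a simple aggregation. The toy sketch below uses invented counterparties and notionals purely to illustrate the shape of the query that took weeks to answer manually.

```python
# Toy illustration of the "what is my total exposure?" question.
# Counterparties, instruments and notionals are made up for the example.
from collections import defaultdict

positions = [
    {"counterparty": "Bank A", "instrument": "interest-rate swap", "exposure_usd": 12_500_000},
    {"counterparty": "Bank B", "instrument": "credit default swap", "exposure_usd": 4_200_000},
    {"counterparty": "Bank A", "instrument": "fx forward", "exposure_usd": 1_800_000},
]

# Roll positions up by counterparty, then in total.
exposure_by_counterparty = defaultdict(float)
for p in positions:
    exposure_by_counterparty[p["counterparty"]] += p["exposure_usd"]

total = sum(exposure_by_counterparty.values())
print(f"Total exposure: ${total:,.0f}")
for cpty, amount in exposure_by_counterparty.items():
    print(f"  {cpty}: ${amount:,.0f}")
```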

Business intelligence and data modeling is an iterative process of experimentation to ask important strategic questions and learn what really matters. I separated the two skill sets because the disciplines are different and the capabilities are specific to each organization.[31],[32] The key point of the intelligence and modeling principle is to incorporate a commitment in risk governance to business intelligence and data modeling, along with the patience to develop the skills needed to support business strategy.

Principle #4 should be designed to better understand business performance, reduce inefficiencies, evaluate security and manage the risks critical to strategy. This is a good place to transition to principle #5, capital structure.

Principle #5: Capital Structure

A firm’s capital structure is one of the key building blocks of long-term success for any viable business, but too often, even well-established organizations stumble (and many fail) for reasons that seem inexplicable.[33] The CFO is often elevated to assume the role of risk manager, and in many firms, staff responsible for risk management report to the CFO; however, upon further analysis, the tools used by CFOs may be too narrow to manage the myriad risks that lead to business failure.

Finance students are well-versed in weighted average cost of capital calculations to achieve the right debt-to-equity mix. Organizations have become adept at managing cash flows, sales, strategy and production during stable market conditions. But how do we explain why so many firms appear to be caught flat-footed during rapid economic change and market disruption? Why is Amazon frequently blamed for causing a “retail apocalypse” in several industries? The true cause may be a pattern of inattentional blindness.[34]
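For reference, the weighted average cost of capital mentioned above is the standard textbook calculation sketched below; the equity, debt, rates and tax figures are made up for illustration. The broader argument is that tools like this, however well understood, are too narrow on their own to surface the blind spots discussed next.

```python
# The standard weighted average cost of capital (WACC) calculation, with
# made-up inputs for illustration only.
def wacc(equity_value: float, debt_value: float,
         cost_of_equity: float, cost_of_debt: float,
         tax_rate: float) -> float:
    total = equity_value + debt_value
    return (equity_value / total) * cost_of_equity \
         + (debt_value / total) * cost_of_debt * (1 - tax_rate)

# Illustrative figures: $600M equity at 9%, $400M debt at 5%, 21% tax rate.
print(f"WACC: {wacc(600e6, 400e6, 0.09, 0.05, 0.21):.2%}")
```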

Inattentional blindness occurs when an individual [or organization] fails to perceive an unexpected stimulus in plain sight. When it becomes impossible to attend to all the stimuli in a given situation, a temporary “blindness” effect can occur, as individuals fail to see unexpected (but often salient) objects or stimuli. In the Harvard Business Review article “Why Good Companies Go Bad,” Donald Sull, Senior Lecturer at the MIT Sloan School, and author Kathleen M. Eisenhardt explain that active inertia is an organization’s tendency to follow established patterns of behavior — even in response to dramatic environmental shifts.

Success reinforces patterns of behavior that become intractable until disruption in the market. According to Sull,

“Organizations get stuck in the modes of thinking that brought success in the past. As market leaders, management simply accelerates all their tried-and-true activities. In trying to dig themselves out of a hole, they just deepen it.”

This may explain why firms spiral into failure, but it doesn’t explain why organizations miss the emergence of competitors or a change in the market in the first place.

Inattentional blindness occurs when firms ignore or fail to develop formal processes that proactively monitor market dynamics for threats to their leadership. Sull and Eisenhardt’s analysis is partially correct in that when firms react, the response is typically half-baked, resulting in damage to capital — or worse, a race to the bottom.

Interestingly, Sull also suggests that an organization’s inability to change extends to legacy relationships with customers, vendors, employees, suppliers and others, creating “shackles” that reinforce the inability to change. Contractual agreements memorialize these relationships and financial obligations, but are rarely revisited after the deals have been completed. Contracts are risk-transfer tools, but indemnification language may be subject to different state laws. How many firms truly understand the risk exposure and financial obligations in legacy contractual agreements? How many firms understand the root cause of financial leakage in contractual language?[35]

Insurance companies are scrambling to mitigate cyber insurance accumulation risks embedded in legacy indemnification agreements.[36],[37] These hidden risks manifest because organizations lack formal processes to adequately assess legacy obligations, creating inattentional blindness to novel risks. Digital transformation will only accelerate accumulation risks in digital assets.

To summarize, the tools to manage capital do not stop with managing the cost of capital, cash flows and financial obligations. Capital can be put at risk by unanticipated blind spots in which risks and uncertainty are viewed too narrowly.

The first pillar, cognitive governance, is the driver of the next four pillars. The five pillars of a cognitive risk framework represent a new maturity level in enterprise risk management, which I propose in order to broaden the view of risk governance and build resilience to evolving threats. It is anticipated that more advanced cognitive risk frameworks will be developed by others (including myself) over time.

The treatment of the remaining four pillars will be shorter and focused on mitigating the issues and risks described in cognitive governance. Intentional design is the next pillar to be introduced.


[1] https://na.theiia.org/standards-guidance/Public%20Documents/PP%20The%20Three%20Lines%20of%20Defense%20in%20Effective%20Risk%20Management%20and%20Control.pdf

[2] https://www.digitalistmag.com/technologies/analytics/2015/09/28/understanding-three-lines-of-defense-part-2-03479576

[3] http://riskoversightsolutions.com/wp-content/uploads/2011/03/Risk-Oversight-Solutions-for-comment-Three-Lines-of-Defense-vs-Five-Lines-of-Assurance-Draft-Nov-2015.pdf

[4] https://www.thoughtco.com/the-maginot-line-3861426

[5] http://hrmars.com/admin/pics/1847.pdf

[6] https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/22394/slovic_241.pdf?sequence=1

[7] https://pdfs.semanticscholar.org/ef56/87859fc1b5d8c85997e4c142ad8a1c345451.pdf

[8] https://www.theedgemarkets.com/article/everyday-matters-tone-top-important

[9] https://ponemonsullivanreport.com/2016/05/third-party-risks-and-why-tone-at-the-top-matters-so-much/

[10] https://ethicalboardroom.com/tone-at-the-top/

[11] http://www.thepumphandle.org/2013/01/16/how-do-we-perceive-risk-paul-slovics-landmark-analysis-2/#.XTdZY5NKg1g

[12] https://www.washingtonpost.com/opinions/chances-are-youre-not-as-open-minded-as-you-think/2019/07/20/0319d308-aa4f-11e9-9214-246e594de5d5_story.html?utm_term=.a7d3b39a4da3

[13] https://hbr.org/2017/11/the-key-to-better-cybersecurity-keep-employee-rules-simple

[14] https://www.massivealliance.com/blog/2017/06/13/public-sector-organizations-more-prone-to-cyber-attacks/

[15] https://securitysifu.com/2019/06/26/cybersecurity-staff-burnout-risks-leaving-organisations-vulnerable-to-cyberattacks/

[16] https://dynamicsignal.com/2017/04/21/employee-productivity-statistics-every-stat-need-know/

[17] https://www.skybrary.aero/bookshelf/books/2037.pdf

[18] https://www.cii.co.uk/media/6006469/simon_ashby_presentation.pdf

[19] https://www.skybrary.aero/index.php/The_Human_Factors_%22Dirty_Dozen%22

[20] https://riskandinsurance.com/the-human-element-in-banking-cyber-risk/

[21] https://www.mckinsey.com/business-functions/risk/our-insights/insider-threat-the-human-element-of-cyberrisk

[22] https://us.norton.com/internetsecurity-how-to-good-cyber-hygiene.html

[23] https://www.researchgate.net/publication/2955727_Cognitive_hacking_A_battle_for_the_mind

[24] https://en.wikipedia.org/wiki/Cognitive_load

[25] https://en.wikipedia.org/wiki/Situation_awareness

[26] https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction

[27] https://en.wikipedia.org/wiki/Risk_matrix

[28] https://www.techrepublic.com/article/85-of-big-data-projects-fail-but-your-developers-can-help-yours-succeed/

[29] https://www.thebalance.com/reserve-primary-fund-3305671

[30] https://www.history.com/topics/21st-century/recession

[31] https://www.forbes.com/sites/bernardmarr/2016/01/07/big-data-uncovered-what-does-a-data-scientist-really-do/#3f10aa82a5bb

[32] https://www.datasciencecentral.com/profiles/blogs/updated-difference-between-business-intelligence-and-data-science

[33] https://hbr.org/1999/07/why-good-companies-go-bad

[34] https://en.wikipedia.org/wiki/Inattentional_blindness

[35] https://www.investopedia.com/terms/l/leakage.asp

[36] https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/centres/risk/downloads/crs-rms-managing-cyber-insurance-accumulation-risk.pdf

[37] https://www.insurancejournal.com/news/international/2018/08/20/498584.htm

Tags: Big Data, Cognitive Risk Framework, risk management, tone at the top

2019-08-22 by: James Bone | Categories: Risk Management
Cognitive Governance: The First Pillar of a Cognitive Risk Framework

2019-06-11 by: James Bone | Categories: Risk Management
A Cognitive Risk Framework for the 4th Industrial Revolution

Introducing the Human Element to Risk Management

As posted in Corporate Compliance Insights

As we move into the 4th Industrial Revolution (4IR), risk management is poised to undergo a significant shift. James Bone asks whether traditional risk management is keeping pace. (Hint: it’s not.) What’s really needed is a new approach to thinking about risks.

Framing the Problem
Generally speaking, organizations have one foot firmly planted in the 19th century and the other foot racing toward the future. The World Economic Forum calls this time in history the 4th Industrial Revolution, a $100 trillion opportunity representing the next generation of connected devices and autonomous systems needed to fuel a new leg of growth. Every revolution creates disruption, and this one will be no exception, including in how risks are managed.

The digital transformation underway is rewriting the rules of engagement.[1] The adoption of digital strategies implies the disaggregation of business processes to third-party providers, vendors and data aggregators, who collectively increase organizational exposure to potential failure in security and business continuity.[2] Reliance on third parties and sub-vendors extends the distance between customers and service providers, creating a “boundaryless” security environment. Traditional concepts of resiliency are challenged when what is considered a perimeter is as fluid as the disparate service providers cobbled together to serve different purposes. A single service provider may be robust in isolation, but may become fragile during a crisis in connected networks.

Digital transformation is, by design, the act of breaking down boundaries in order to reduce the “friction” of doing business. Automation is enabling speed, efficiency and multilayered products and services, all driven by higher computing power at lower prices. Digital unicorns, evolving as 10- to 20-year “overnight success stories,” give the impression of endless opportunity, and capital returns from early-stage tech firms continue to drive rapid expansion in diverse digital strategies.

Thus far, these risks have been fairly well-managed, with notable exceptions.

Given this rapid change, it is reasonable to ask if risk management is keeping pace as well. A simple case study may clarify the point and raise new questions.

In 2016, the U.S. presidential election ushered in a new risk: a massive cognitive hack. Researchers at Dartmouth’s Thayer School of Engineering developed the theory of cognitive hacking in 2003, although the technique has been around since the beginning of the internet.[3]

Cognitive hacks are designed to change the behavior and perception of the target of the attack. The use of a computer is optional in a cognitive hack. These hacks have been called phishing or social engineering attacks, but these terms don’t fully explain the diversity of methods involved. Cognitive hacks are cheap, effective and used by nation states and amateurs alike. Generally speaking, “deception” – in defense or offense – on the internet is the least expensive and most effective approach to bypass or enhance security, because humans are the softest target.[4]

In “Cognitive Hack”, one chapter entitled “How to Hack an Election” describes how cognitive hacks have been used in political campaigns around the world to great effect.[5] It is not surprising that it eventually made its way into American politics. The key point is that deception is a real risk that is growing in sophistication and effectiveness.[6]

In researching why information security risks continue to escalate, it became clear that a new framework for assessing risks in a digital environment required a radically new approach to thinking about risks. The escalation of cyber threats against an onslaught of security spending and resources is called the “cyber paradox.”[7] We now know the root cause is the human-machine interaction, but sustainable solutions have been elusive.

Here is what we know: [digital] risks thrive in diverse human behavior!

Some behaviors are predictable but evolve over time. Security methods that focus on behavioral analytics and defense have found success, but are too reactive to provide assurance. One interesting finding noted that a focus on simplicity and good work relations plays a more effective role than technology solutions. A 2019 study of cyber resilience found that “infrastructure complexity was a net contributor to risks, while the human elements of role alignment, collaboration, problem resolution and mature leadership played key roles in building cyber resilience.”[8]

In studying the phenomenon of how the human element contributes to risk, it became clear that risk professionals in the physical sciences were using these same insights into human behavior and cognition to mitigate risks to personal safety and enable better human performance.

Diverse industries, such as air travel, automotive, health care and tech, have benefited from human element design to improve safety and create sustainable business models. Ironically, the crime-as-a-service (CaaS) model may be the best example: organized criminals on the dark web work together with the best architects of CaaS products and services, making billions selling to a growing market of buyers.

The International Telecommunication Union (ITU), in publishing its second Global Cybersecurity Index (GCI), noted that approximately 38 percent of countries have a cybersecurity strategy and 12 percent are considering one.[9]

The agency said more effort is needed in this critical area, particularly since a strategy conveys that governments consider digital risks a high priority. “Cybersecurity is an ecosystem where laws, organizations, skills, cooperation and technical implementation need to be in harmony to be most effective,” stated the report, adding that cybersecurity is “becoming more and more relevant in the minds of countries’ decision-makers.”

Ironically, social networks in the dark web have proven to be more robust than billions in technology spending.

The formation of systemic risks in a broader digital economy will be defined by how well security professionals bridge 19th-century vulnerabilities with next-century business models. Automation will enable the transition, but human behavior will determine the success or failure of the 4th Industrial Revolution.

A broader set of solutions is beyond the scope of this paper, but it will take a coordinated approach to make real progress.

The common denominator in all organizations is the human element, but we lack a formal approach for assessing the transition from 19th-century approaches to this new digital environment.[10] Not surprisingly, I am neither the first nor the last to consider the human element in cybersecurity, but I am convinced that the solutions are not purely prescriptive in nature, given the complexity of human behavior.

The assumption is that humans will simply come along like they have so often in the past. Digital transformation will require a more thoughtful and nuanced approach to the human-machine interaction in a boundaryless security environment.

Cognitive hackers from the CIA, NSA and FBI agree that addressing the human element is the most effective approach.[11] A cognitive risk framework is designed to address the human element and enterprise risk management in broader ways than changing employee behavior. A cognitive risk framework is a fundamental shift in thinking about risk management and risk assessment and is ideally suited for the digital economy.

Technology is creating a profound change in how business is conducted. The fragility in these new relationships is concentrated at the human-machine interaction. Email is just one of dozens of iterations of vulnerable endpoints inside and outside of organizations. Advanced analytics will play a critical role in security, but organizational situational awareness will require broader insights.

Recent examples include the 2016 distributed denial of service (DDoS) attack on Dyn, an internet infrastructure company that provides domain name service (DNS) to its customers.[12] A single service provider created unanticipated systemic risks across the East Coast.

DNS translates the web address you type into your browser into the IP address of the site you want to reach.[13],[14] A DDoS attack on a DNS provider prevents that translation, cutting off access to websites. Much of the East Coast was in a panic as the attack slowly spread, affecting Amazon AWS, Twitter, Spotify, GitHub, Etsy, Vox, PayPal, Starbucks, Airbnb, Netflix and Reddit.
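A one-line resolution example makes the dependency visible: before any page loads, the name typed into the browser must be resolved to an IP address, and that resolution is the service Dyn performed for its customers. The hostname below is illustrative; if the resolver cannot answer, this step fails even though the site's own servers are healthy.

```python
# Why a DNS outage blocks access to otherwise healthy sites: the name must be
# resolved to an IP address before any connection is made.
import socket

hostname = "www.example.com"   # illustrative hostname
try:
    ip_address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {ip_address}")
except socket.gaierror as err:
    # This is what users experience during a resolver outage.
    print(f"DNS resolution failed for {hostname}: {err}")
```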

These risks are known, but mitigating them requires complex arrangements that take time. These visible examples of bottlenecks in the network offer an opportunity to reduce fragility in the internet; however, resilience on the internet will require trusted partnerships to build robust networks beyond individual relationships.

The collaborative development of the internet is the best example of complete autonomy, robustness and fragility. The 4th Industrial Revolution will require cooperation on security, risk mitigation and shared utilities that benefit the next leg of infrastructure.

Unfortunately, systemic risks are already forming that may threaten free trade in technology as nations begin to plan for and impose restrictions to internet access. A recent Bloomberg article lays bare the global divisions forming regionally as countries rethink an open internet amid political and security concerns.[15]

So, why do we need a cognitive risk framework?
Cognitive risk management is a multidisciplinary focus on human behavior and the factors that enhance or detract from good outcomes. Existing risk frameworks tend to consider only the downside of human behavior, but human behavior is not one-dimensional, and neither are the solutions. Paradoxically, cybercriminals are experts at exploiting trust in a digital environment and use a variety of methods [cognitive hacks] to change behavior in order to circumvent information security controls.

A simple answer is that cognitive risks are pervasive in all organizations but are too often ignored until it is too late or are not understood in the context of organizational performance. Cognitive risks are diverse and range from a toxic work environment, workplace bias and decision bias to strategic and organizational failure.[16], [17], [18] More recent research is starting to paint a more vivid picture of the role of human error in the workplace, but much of this research is largely ignored in existing risk practice.[19], [20], [21], [22], [23] A cognitive risk framework is needed to address the most challenging risks we face … the human mind!

A cognitive risk framework works just like digital transformation: by breaking down the organizational boundaries that prevent optimal performance and risk reduction.

Redesigning Risk Management for the 4th Industrial Revolution!
The Cognitive Risk Framework for Cybersecurity and Enterprise Risk Management is a first attempt at developing a fluid set of pillars and practices to complement COSO ERM, ISO 31000, NIST and other risk frameworks with the human at the center. Each of the Five Pillars will be explored as a new model for resilience in the era of digital transformation.

It is time to humanize risk management!

A cognitive risk framework has five pillars. Subsequent articles will break down each of the five pillars to demonstrate how each pillar supports the other as the organization develops a more resilient approach to risk management.

The Five Pillars of a Cognitive Risk Framework include:

I. Cognitive Governance
II. Intentional Design
III. Risk Intelligence & Active Defense
IV. Cognitive Security/Human Elements
V. Decision Support (situational awareness)

Lastly, as part of the rollout of a cognitive risk framework, I am conducting research at Columbia University’s School of Professional Studies to better understand advances in risk practice beyond existing risk frameworks. My goal, with your help, is to better understand how risk management practice is evolving across as many risk disciplines as possible. Participants in the survey will be given free access to the final report. An executive summary will be published with the findings. Contact me at jb4015@columbia.edu. Emails will be used only for the purpose of distributing the survey and its findings.

*Correction: The reference to Level 3 Communication experiencing a cyberattack was reported incorrectly. The reference to Level 3 is related to a 2013 outage due to a “failing fiber optic switch” not a cyberattack.  Apologies for the incorrect attribution. The purpose of the reference is related to systemic risks in the Internet. James Bone

[1] https://robllewellyn.com/10-digital-transformation-risks/

[2] https://www.information-age.com/security-risks-in-digital-transformation-123478326/

[3] http://www.ists.dartmouth.edu/library/301.pdf

[4] https://www.csiac.org/journal-article/cyber-deception/

[5] https://www.amazon.com/Cognitive-Hack-Battleground-Cybersecurity-Internal/dp/149874981X

[6] https://www.csiac.org/journal-article/cyber-deception/

[7] https://www.lawfareblog.com/cyber-paradox-every-offensive-weapon-potential-chink-our-defense-and-vice-versa

[8] https://www.ibm.com/downloads/cas/GAVGOVNV

[9] https://news.un.org/en/story/2017/07/560922-half-all-countries-aware-lacking-national-plan-cybersecurity-un-agency-reports

[10] https://www.humanelementsecurity.com/content/Leadership.aspx

[11] http://aapa.files.cms-plus.com/SeminarPresentations/2016Seminars/2016SecurityIT/Lee%20Black.pdf

[12] https://www.wired.com/2016/10/internet-outage-ddos-dns-dyn/

[13] https://public-dns.info/nameserver/us.html

[14] https://en.wikipedia.org/wiki/List_of_managed_DNS_providers

[15] https://www.bloomberg.com/quicktake/how-u-s-china-tech-rivalry-looks-like-a-digital-cold-war?srnd=premium

[16] https://healthprep.com/articles/mental-health/types-workplace-bullies/?utm_source=google

[17] https://www.forbes.com/sites/amyanderson/2013/06/17/coping-in-a-toxic-work-environment/

[18] https://knowledge.wharton.upenn.edu/article/is-your-workplace-tough-or-is-it-toxic/

[19] https://www.robsonforensic.com/articles/human-error-expert-witness-human-factors

[20] https://rampages.us/srivera/2015/05/24/errors-in-human-inquiry/

[21] https://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-283

[22] https://www.jstor.org/stable/1914185?seq=1#page_scan_tab_contents

[23] https://www.behavioraleconomics.com/resources/mini-encyclopedia-of-be/bounded-rationality/

2019-01-23 by: James Bone Categories: Risk Management Cognitive Hack: The New Battleground In Cybersecurity

James Bone is the author of Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind (Taylor & Francis, 2017) and is a contributing author for Compliance Week, Corporate Compliance Insights, and Life Science Compliance Updates. James is a lecturer at Columbia University’s School of Professional Studies in the Enterprise Risk Management program and consults on ERM practice.

He is the founder and president of Global Compliance Associates, LLC and Executive Director of TheGRCBlueBook. James founded Global Compliance Associates, LLC to create the first cognitive risk management advisory practice. James graduated from Drury University with a B.A. in Business Administration, from Boston University with an M.A. in Management and from Harvard University with an M.A. in Business Management, Finance and Risk Management.


Christopher P. Skroupa: What is the thesis of your book Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind and how does it fit in with recent events in cyber security?

James Bone: Cognitive Hack follows two rising narrative arcs in cyber warfare: the rise of the “hacker” as an industry and the “cyber paradox,” namely why billions spent on cyber security fail to make us safe. The backstory of the two narratives reveals a number of contradictions about cyber security, as well as how surprisingly simple it is for hackers to bypass defenses. The cyber battleground has shifted from an attack on hard assets to a much softer target: the human mind. If human behavior is the new and last “weakest link” in the cyber security armor, is it possible to build cognitive defenses at the intersection of human-machine interactions? The answer is yes, but the change that is needed requires a new way of thinking about security, data governance and strategy. The two arcs meet at the crossroads of data intelligence, deception and a reframing of security around cognitive strategies.

The purpose of Cognitive Hack is to look not only at the digital footprint left behind from cyber threats, but to go further—behind the scenes, so to speak—to understand the events leading up to the breach. Stories, like data, may not be exhaustive, but they do help to paint in the details left out. The challenge is finding new information buried just below the surface that might reveal a fresh perspective. The book explores recent events taken from today’s headlines to serve as the basis for providing context and insight into these two questions.

Skroupa: IoT has been highly scrutinized as having the potential to both increase technological efficiency and broaden our cyber vulnerabilities. Do you believe the risks outweigh the rewards? Why?

Bone: The recent Internet outage in October of this year is a perfect example of the power and stealth of IoT attacks. What many are not aware of is that hackers have been experimenting with IoT attacks in increasingly complex and potentially damaging ways. The TOR Network, used in the Dark Web to provide legitimate and illegitimate users anonymity, was almost taken down by an IoT attack. Security researchers have been warning of other examples of connected smart devices being used to launch DDoS attacks that have not garnered media attention. As the number of smart devices spreads, the threat only grows. The anonymous attacker in October is said to have used only 100,000 devices. Imagine what could be done with one billion devices as manufacturers globally export them, creating a new network of insecure connections with little to no security in place to detect, correct or prevent hackers from launching attacks from anywhere in the world.

The question of weighing the risks versus the rewards is an appropriate one. Consider this: The federal government has standards for regulating the food we eat, the drugs we take, the cars we drive and a host of other consumer goods and services, but the single most important tool the world increasingly depends on has no gatekeeper to ensure that the products and services connected to the Internet don’t endanger national security or pose a risk to its users. At a minimum, manufacturers of IoT must put measures in place to detect these threats, disable IoT devices once an attack starts and communicate the risks of IoT more transparently. Lastly, the legal community has not kept pace with the development of IoT; however, this is an area that will be ripe for class action lawsuits in the near future.

Skroupa: What emerging trends in cyber security can we anticipate from the increasing commonality of IoT?

Bone: Cyber crime has grown into a thriving black market complete with active buyers and sellers, independent contractors and major players who, collectively, have developed a mature economy of products, services, and shared skills, creating a dynamic laboratory of increasingly powerful cyber tools unimaginable before now. On the other side, cyber defense strategies have not kept pace even as costs continue to skyrocket amid asymmetric and opportunistic attacks. However, a few silver linings are starting to emerge around a cross-disciplinary science called Cognitive Security (CogSec), Intelligence and Security Informatics (ISI) programs, Deception Defense, and a framework of Cognitive Risk Management for cyber security.

On the other hand, the job description of “hacker” is evolving rapidly with some wearing “white hats,” some with “black hats” and still others with “grey hats.” Countries around the world are developing cyber talent with complex skills to build or break security defenses using easily shared custom tools.

The rise of the hacker as a community and an industry will have long-term ramifications for our economy and national security that deserve more attention; otherwise, the unintended consequences could be significant. In the same light, the book looks at the opportunity and challenge of building trust into networked systems. Building trust in networks is not a new concept, but it is too often a secondary or tertiary consideration as systems designers are forced to rush products and services to market to capture share, leaving security considerations to corporate buyers. IoT is a great example of this challenge.

Skroupa: Could you briefly describe the new Cognitive Risk Framework you’ve proposed in your book as a cyber security strategy?

Bone: First of all, this is the first cognitive risk framework of its kind designed for enterprise risk management. The Cognitive Risk Framework for Cyber security (CRFC) is an overarching risk framework that integrates technology and behavioral science to create novel approaches in internal controls design that act as countermeasures lowering the risk of cognitive hacks. The framework targets cognitive hacks as a primary attack vector because of the high success rate of these attacks and the overall volume of cognitive hacks versus more conventional threats. The cognitive risk framework is a fundamental redesign of enterprise risk management and internal controls design for cyber security but is equally relevant for managing risks of any kind.

The concepts referenced in the CRFC are drawn from a large body of research in multidisciplinary topics. Cognitive risk management is a sister discipline of a parallel body of science called Cognitive Informatics Security, or CogSec. It is also important to point out that, as the creator of the CRFC, I have borrowed the principles and practices prescribed herein from cognitive informatics security, machine learning, artificial intelligence (AI), and behavioral and cognitive science, among other fields that are still evolving. The Cognitive Risk Framework for Cyber security revolves around five pillars: Intentional Controls Design, Cognitive Informatics Security, Cognitive Risk Governance, Cyber security Intelligence and Active Defense Strategies, and Legal “Best Efforts” Considerations in Cyberspace.

Many organizations are doing some aspect of a “cogrisk” program but haven’t formulated a complete framework; others have not even considered the possibility; and still others are on the path toward a functioning framework influenced by management. The Cognitive Risk Framework for Cybersecurity responds to this interim period of transition to a new level of business operations (cognitive computing) informed by better intelligence to solve the problems that hinder growth.

Christopher P. Skroupa is the founder and CEO of Skytop Strategies, a global organizer of conferences.

https://www.forbes.com/sites/christopherskroupa/2016/11/21/cognitive-hack-the-new-battleground-in-cybersecurity/#746438ab7f3e

2017-10-04 by: James Bone Categories: Risk Management How to Design an Intelligent Organization

“Simplicity is the value proposition that should be expected from the implementation of modern technology solutions.” – James Bone, Executive Director, TheGRCBlueBook

“Intelligent Automation” is such a new term that you won’t find it in Wikipedia or Merriam-Webster. However, we are clearly in the early stages of a technological transformation that’s no less dramatic than the one spurred by the emergence of the Internet.

A new age in quantitative and empirical methods will change how businesses operate as well as the role of traditional finance professionals. To compete in this environment, finance teams must be willing to adopt new operating models that reduce costs and improve performance through better data. In short, a new framework is needed for designing an “intelligent organization.”

The convergence of technology and cognitive science provides finance professionals with powerful new tools to tackle complex problems with more certainty. Advanced analytics and automation will increasingly play bigger roles as tactical solutions to drive efficiency or to help executives solve complex problems.

But the real opportunities lie in reimagining the enterprise as an intelligent organization — one designed to create situational awareness with tools capable of analyzing disparate data in real or near-real time.

Automation of redundant processes is only the first step. An intelligent organization strategically designs automation to connect disparate systems (e.g., data sources) and equips users with tools to quickly respond or adjust to threats and opportunities in the business.

Situational awareness is the product of this design. In order to push decision-making deeper into the organization, line staff need the tools and information to respond to change in the business and the flexibility to adjust and mitigate problems within prescribed limits. Likewise, senior executives need near-real-time data that lets them query performance across different lines of business with confidence and anticipate the impact of isolated or enterprise-wide events in order to avoid costly mistakes.
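As a rough illustration of “prescribed limits,” the sketch below flags any monitored metric that drifts outside its agreed band so line staff can act before a month-end report surfaces the problem; the metric names and thresholds are invented for the example, not taken from any particular finance function.

```python
# Illustrative "prescribed limits" check: each metric has an agreed band,
# and anything outside the band generates an alert for line staff.
LIMITS = {
    "settlement_fails_pct": (0.0, 0.5),   # hypothetical limits
    "open_vendor_exceptions": (0, 25),
    "avg_close_cycle_days": (0, 6),
}

def breaches(latest_readings):
    alerts = []
    for metric, value in latest_readings.items():
        low, high = LIMITS.get(metric, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alerts.append("{} = {} is outside [{}, {}]".format(metric, value, low, high))
    return alerts

print(breaches({"settlement_fails_pct": 0.8, "avg_close_cycle_days": 5}))
```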

Financial reporting is becoming increasingly complex at the same time finance professionals are being challenged to manage emerging risks, reduce costs, and add value to strategic objectives. These competing mandates require new support tools that deliver intelligence and inspire greater confidence in the numbers.


Thankfully, a range of new automation tools is now available to help finance professionals achieve better outcomes against this dual mandate. However, to be successful, finance executives need a new cognitive framework that anticipates the needs of staff and provides access to the right data in a resilient manner.

This cognitive framework gives finance a design road map that accounts for the human element, focusing on how staff use technology and on simplifying the rollout and implementation of advanced analytical tools.

The framework is composed of five pillars, each designed to complement the others in the implementation of intelligent automation and the development of an intelligent organization:

  1. Cognitive governance
  2. Intentional control design
  3. Business intelligence
  4. Performance management
  5. Situational awareness

Cognitive governance is the driver of intelligent automation as a strategic tool in guiding organizational outcomes. The goal of cognitive governance, as the name implies, is to facilitate the design of intelligent automation to create actionable business intelligence, improve decision-making, and reduce manual processes that lead to poor or uncertain outcomes.

In other words, cognitive governance systematically identifies “blind spots” across the firm then directs intelligent automation to reduce or eliminate the blind spots.

The end game is to create situational awareness at multiple levels of the organization with better tools to understand risks, errors in judgment, and inefficient processes. Human error as a result of decision-making under uncertainty is increasingly recognized as the greatest risk to organizational success. Therefore, it is crucial for senior management to create a systematic framework for reducing blind spots in a timely manner. Cognitive governance sets the tone and direction for the other four pillars.
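One hedged way to picture how a cognitive governance function might surface blind spots is to score each process on a few observable signals of weak visibility and send the worst offenders to the automation queue first; the fields, values and weights below are illustrative assumptions, not a prescribed scoring model.

```python
# Illustrative blind-spot ranking: processes with heavy manual touch, recurring
# errors and stale reviews float to the top of the automation queue.
processes = [
    {"name": "client onboarding",    "manual_steps": 14, "error_rate": 0.06, "days_since_review": 220},
    {"name": "trade reconciliation", "manual_steps": 4,  "error_rate": 0.01, "days_since_review": 30},
    {"name": "vendor due diligence", "manual_steps": 9,  "error_rate": 0.04, "days_since_review": 400},
]

def blind_spot_score(process):
    # Simple weighted score; higher means less visibility and control.
    return (0.4 * process["manual_steps"]
            + 50 * process["error_rate"]
            + 0.01 * process["days_since_review"])

for p in sorted(processes, key=blind_spot_score, reverse=True):
    print("{}: {:.2f}".format(p["name"], blind_spot_score(p)))
```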

Intentional control design, business intelligence, and performance management are tools for creating situational awareness in response to cognitive governance mandates. A cognitive framework does not require huge investments in the latest big data “shiny objects.” It’s not necessary to spend millions on machine learning or other forms of artificial intelligence. Alternative automation tools for simplifying operations are readily available today, as is access to advanced analytics, for organizations large and small, from a variety of cloud services.

However, for firms that want to use machine learning/AI, a cognitive framework easily integrates any widely used tool or regulatory risk framework. A cognitive framework is focused on a factor that others ignore: how humans interact with and use technology to get their work done most effectively.

Network complexity has been identified as a strategic bottleneck in response times for dealing with cybersecurity risks, cost of technology, and inflexibility in fast-paced business environments. Without a proper framework, improperly designed automation processes may simply add to infrastructure complexity.

There is also a dark side to machine learning/AI that organizations must understand in order to anticipate best use cases and avoid the inevitable missteps that will come with autonomous systems. Microsoft learned a hard lesson with “Tay,” its chatbot project, which was shelved when users taught the bot racist remarks. While there are many uses for AI, this technology is still in an experimental stage of growth.

Overly complicated approaches to intelligent automation are the leading cause of failed big data projects. Simplicity is the new value proposition that should be expected from the implementation of technology solutions. Intelligent automation is one tool to accomplish that goal, but execution requires a framework that understands how people use new technology effectively.

Simplicity must be a strategic design imperative based on a framework for creating situational awareness across the enterprise.

James Bone is a cognitive risk consultant; a lecturer at Columbia University’s School of Professional Studies; founder of TheGRCBlueBook.com, an online directory of governance, risk, and compliance tools; and author of “Cognitive Hack: The New Battleground in Cybersecurity … the Human Mind.”

How to Design an Intelligent Organization

To see the post in CFO magazine click the link above

2017-01-30 by: James Bone Categories: Risk Management Reintroducing TheGRCBlueBook: Business Brochure of Services


TheGRCBlueBook combines risk advisory services with cutting-edge research, knowledge of the GRC marketplace and a platform for GRC solutions providers to educate and showcase their products and services to a global market of risk, audit, compliance and IT professionals seeking cost-effective solutions to manage a variety of risks. Partner with TheGRCBlueBook to help educate corporate buyers about your GRC products and services.

2016-11-29 by: James Bone Categories: Risk Management KPMG: Harnessing the Power of Cognitive Technology to Transform Audit


Our [KPMG] work as audit professionals is fundamentally about “trust.” For the capital markets to operate effectively and to the benefit of investors and society more broadly, there must be integrity and confidence in the system. In serving the capital markets and the public interest, we work to help instill trust and confidence in the information used to make important decisions.
In the following pages, we begin to explore how we can continue to promote trust during a time of profound change across the business landscape. Given the explosion of data and the digitization of our lives, we want to promote a discussion about how the audit profession must evolve its tools and approach to keep up with the pace of change and remain relevant in a dynamic marketplace. Specifically, our profession must embrace the use of advanced technologies, including data and analytics (D&A), robotics, automation and cognitive intelligence, to manage processes, support planning and inform decision making. At KPMG we are constantly thinking about the development of innovative capabilities and technologies that will enhance quality and strengthen the relevance of our audit into the future.

2016-08-05 by: James Bone Categories: Risk Management Cognitive Risk Framework for Cybersecurity part II

In part I of the Cognitive Risk Framework for Cybersecurity, I introduced the reasoning for developing a bridge from existing IT and risk frameworks to the next generation of risk management based on cognitive science. These concepts are no longer theoretical and, in fact, are evolving faster than most IT security and risk professionals appreciate. In part II, I introduce the pillars of a cognitive risk framework for cybersecurity that make the program operational. The pillars represent existing technology and concepts that are increasingly being adopted by technology firms, government agencies, computer scientists and industries as diverse as healthcare, biotechnology and financial services.

The following is an abbreviated version of the cognitive risk framework for cybersecurity that will be published later this year.

A cognitive risk framework is fundamental to the integration of existing internal controls, risk management practice, cognitive security technology and the people who are responsible for executing on the program components that make up enterprise risk management. Cognitive risk fills a gap in today’s cybersecurity programs, which fail to fully address how to defend the “softest target”: the human mind.

A functioning cognitive risk framework for cybersecurity provides guidance for the development of a CogSec response that is three-dimensional instead of a one-dimensional defensive posture. Further, cognitive risk requires an expanded taxonomy to level set expectations about risk management through scientific methods and improve communications about risks. A CRFC is an evolutionary step from intuition and hunches to quantitative analysis and measurement of risks. The first step in the transition to a CRFC is to develop an organizational Cognitive Map. Paul Slovic’s Perception of Risk research is a guide for starting the process to understand how decision-makers across an organization perceive key risks in order to prioritize actionable steps for a range of events large and small. A Cognitive Map is one of many tools risk professionals must use to expand discussions on risk and form agreements for enhanced techniques in cybersecurity.
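A minimal sketch of what a first pass at a Cognitive Map might look like in practice, assuming each decision-maker rates key risks on two Slovic-style perception dimensions (how dreaded and how well understood the risk feels); the raters, risks and scores below are purely illustrative.

```python
# Illustrative Cognitive Map input: average each risk's perception ratings
# across decision-makers to see where views converge or diverge.
from collections import defaultdict
from statistics import mean

ratings = [
    # (rater, risk, dread 1-10, familiarity 1-10) -- invented example data
    ("CISO", "ransomware",    9, 7),
    ("CFO",  "ransomware",    8, 4),
    ("COO",  "vendor outage", 6, 8),
    ("CFO",  "vendor outage", 7, 5),
]

perception = defaultdict(lambda: {"dread": [], "familiarity": []})
for _, risk, dread, familiarity in ratings:
    perception[risk]["dread"].append(dread)
    perception[risk]["familiarity"].append(familiarity)

for risk, axes in perception.items():
    print(risk, "dread:", round(mean(axes["dread"]), 1),
          "familiarity:", round(mean(axes["familiarity"]), 1))
```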

Risk communication sounds very simple on the surface, but even risk experts will use the term “risk” with different meanings without recognizing the contradictions. A senior executive at a major bank told me that she thought the firm understood its risks, but the 2008 Great Recession revealed major disagreements in how the firm talked about risk and in the decisions made to manage it. Poor communication about risk is more common than not without a structured way to put risks in context and account for a diversity of risk perceptions. “The fact that the word ‘risk’ has so many different meanings often causes problems in communication,” according to Slovic.

Organizations rarely discuss these differences openly or even understand they exist until a major risk event forces the issues onto the table. Even then, the focus of the discussion quickly pivots to solving the problem with short-term solutions, leaving the underlying conflicts unresolved. Slovic, Peters, Finucane and MacGregor (2005) posited that “risk is perceived and acted on in two ways: Risk as Feelings refers to individuals’ fast, instinctive, and intuitive reactions to danger. Risk as Analysis brings logic, reason, and scientific deliberation to bear on risk management.”

Some refer to this exercise as forming a “risk appetite,” but again the term is vague and doesn’t capture the full range of ways individuals experience risk. Researchers now recognize diverse views of risk as relevant, from the nonscientist, who views risk subjectively, to the scientist, who evaluates adverse events in terms of the probability and consequences of risks. A deeper view into risk perceptions explains why there is little consensus on the role of risk management and dissatisfaction when expectations are not met.

Techniques for reconciling these differences create a forum that leads to better discussions about risk. Discussions about risk management are extremely important to organizational success yet paradoxically produce discomfort, whether in personal or business life, when planning for the future. Personal experience, in conjunction with a body of research, demonstrates that the topic of risk tends to elicit a strong emotional response. Kahneman and Tversky called this response “loss aversion.” “Numerous studies have shown that people feel losses more deeply than gains of the same value (Kahneman and Tversky 1979, Tversky and Kahneman 1991).” Losses have a powerful psychological impact that lingers long after the fact, coloring one’s perception of risk taking.
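To make loss aversion concrete, here is a small sketch of the prospect theory value function using commonly cited parameter estimates (alpha = beta = 0.88, lambda = 2.25). The point is simply that a loss is felt roughly twice as intensely as an equal gain, not that these exact parameters describe any particular organization.

```python
# Illustrative prospect theory value function (Kahneman & Tversky),
# with commonly cited parameter estimates.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha            # gains are valued concavely
    return -lam * ((-x) ** beta)     # losses are weighted more heavily

print(value(100))    # subjective value of gaining 100 (about 57.6)
print(value(-100))   # subjective value of losing 100 (about -129.5, roughly 2.25x as intense)
```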

Over time, these perceptions about risk and loss become embedded in the unconscious and, by virtue of the vagaries of memory, the facts and circumstances fade. The natural bias to avoid loss leads us to a fallacy that assumes losses are avoidable if people simply make the right choices. This common view of risk awareness fails to account for uncertainty, the leading cause of surprise when expectations are not met. This fallacy of perceived risks produces an underestimation or overestimation of the probability of success or failure.

A Cognitive Risk Framework for Cybersecurity, or for any other risk, requires a clear understanding of and agreement on the roles of data management, risk and decision-support analytics, parameters for dealing with uncertainty (imperfect information), and how technology is integrated to facilitate the expansion of what Herbert A. Simon called “bounded rationality.” Building a CRFC does not eliminate risk; it develops a new kind of intelligence about risk.

A cognitive risk framework is needed to advance risk management in the same way economists deconstructed the “rational man” theory. The myth of “homo economicus” still lingers in risk management, damaging the credibility of the profession. “Homo economicus, economic man, is a concept in many economic theories portraying humans as consistently rational and narrowly self-interested who usually pursue their subjectively defined ends optimally.”[i] These concepts have since been contrasted with Simon’s bounded rationality, not to mention any number of financial market failures and episodes of unethical and fraudulent behavior that stand as evidence of the weakness in the argument. A cognitive risk framework will serve to broaden awareness of the science of cognitive hacks, as well as of the factors that limit our ability to deal effectively with the Cyber Paradox, going beyond the selection of a defensive strategy. Let’s take a closer look at what a cognitive risk framework for cybersecurity looks like and consider how to operationalize the program.

The foundational base (“Guiding Principles”) for developing a cognitive risk framework for cybersecurity starts with Slovic’s “Cognitive Map – Perceptions of Risk” and an orientation in Simon’s “Bounded Rationality” and Kahneman and Tversky’s “Prospect Theory – An Analysis of Decision Making Under Risk.” In other words, a cognitive risk framework formally develops a structure for actualizing the two ways people fundamentally perceive adverse events: “risk as feelings” and “risk as analysis.” Each of the following guiding principles is a foundational building block for a more rigorous, science-based approach to risk management.

The CRFC guiding principles expand the language of risk with concepts from behavioral science to build a bridge connecting decision science, technology and risk management. The CRFC guiding principles establish a link to, and recognize the important work undertaken by, the COSO frameworks for enterprise risk management and internal control, the ISO 31000 risk management framework, and the NIST and ISO/IEC 27001 information security standards, which make reference to the need for processes to deal with the human element. The cognitive risk framework could be extended to other risk programs; however, the focus here is cybersecurity and the program components needed to operationalize its execution. The CRFC program components include five pillars: 1) Intentional Controls Design; 2) Cognitive Informatics Security (Security Informatics); 3) Cognitive Risk Governance; 4) Cybersecurity Intelligence & Active Defense Strategies; and 5) Legal “Best Efforts” Considerations in Cyberspace.

Brief overview of the Five Pillars of a CRFC:

Intentional Controls Design

Intentional controls design recognizes the importance of trust in networked information systems by advocating the automation and integration of internal controls design for IT, operational, audit and compliance controls. Intentional controls design is the process of embedding information security controls, active monitoring, audit reporting, risk management assessment and operational policy and procedure controls into networked information systems through user-guided GUI application design and data repositories that enable machine learning, artificial intelligence and other currently available smart-system methods.

Intentional controls design is an explicit choice made by information security analysts to reduce or remove reliance on people through the use of automated controls. Automated controls must be animated through the use of machine learning, artificial intelligence algorithms and other automation based on regulatory guidance and internal policy. Intentional controls design is implemented on two levels of hierarchy: 1) enterprise-level intentional controls design anticipates that these controls are mandatory across the organization and can be changed or modified only with the approval of the senior executives responsible for enterprise governance; 2) operational-level intentional controls design anticipates that each division or business unit may require unique control design to account for differences across lines of business in regulatory regimes, risk profile, vendor relationships and other factors unique to those operations.
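A hedged sketch of what intentional controls design can look like when controls are expressed as data rather than documents; the control names, owners and fields are illustrative assumptions, chosen only to show the enterprise/operational split described above.

```python
# Illustrative control register: each control records its hierarchy level,
# how it is enforced, and who may change it.
controls = [
    {
        "id": "ENT-001",
        "level": "enterprise",        # mandatory firm-wide
        "objective": "block privileged access without multi-factor authentication",
        "enforcement": "automated",   # embedded in the identity platform, not a manual check
        "change_authority": "enterprise governance committee",
    },
    {
        "id": "OPS-014",
        "level": "operational",       # tailored to one business unit's regulatory regime
        "objective": "dual approval on vendor payments above a set threshold",
        "enforcement": "automated",
        "change_authority": "business-unit risk owner",
    },
]

def mandatory_controls(register):
    # Enterprise-level controls apply across the whole organization.
    return [c["id"] for c in register if c["level"] == "enterprise"]

print(mandatory_controls(controls))
```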

Cognitive Informatics Security (Security Informatics)

Cognitive informatics security is a rapidly evolving discipline within cybersecurity and healthcare, with so many branches that it is difficult to settle on a single definition. Think of cognitive security as an overarching strategy for cybersecurity executed through a variety of advanced computing methodologies.

“Cognitive computing has the ability to tap into and make sense of security data that has previously been dark to an organization’s defenses, enabling security analysts to gain new insights and respond to threats with greater confidence at scale and speed. Cognitive systems are taught, not programmed, using the same types of unstructured information that security analysts rely on.”[i]

The International Journal of Cognitive Informatics and Natural Intelligence defines cognitive informatics as “a transdisciplinary enquiry of computer science, information sciences, cognitive science, and intelligence science that investigates the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing. Cognitive computing is an emerging paradigm of intelligent computing methodologies and systems based on cognitive informatics that implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain.”[ii]

Cognitive Risk Governance

The Cognitive Risk Governance pillar is concerned with the role of the board of directors and senior management in strategic planning and executive sponsorship of cybersecurity. Boards of directors have historically delegated risk and compliance reporting to the audit committee, although a few forward-thinking firms have appointed a senior risk executive who reports directly to the board. In order to implement a Cognitive Risk Framework for Cybersecurity, the entire board must participate in an orientation to the guiding principles to set the stage and tone for the transformation required to incorporate cognition into a security program.

The framework represents a transformational change in risk management, cybersecurity defense and the understanding of decision-making under uncertainty. To date, traditional risk management has lacked the scientific rigor of quantitative analysis and predictive science. The framework dispels myths about risk management while aligning the practice of security and risk management with the best science and technology available today and in the future.

Transformational change from an old framework to a new one requires leadership from the board and senior management that goes beyond the sponsorship of a few new initiatives. The framework represents a fundamentally new vision for what is possible in risk and security, whether the objective is cybersecurity or enterprise risk management. Change is challenging for most organizations; however, the transformation required to move to a new level of cognition may be the hardest, but most effective, any firm will ever undertake. This is exactly why the board and senior management must understand the framing of decision-making and the psychology of choice. Why, you may ask, must senior management understand what one does naturally and intuitively? The answer is that change is a choice, and the process of deciding among a set of options is not as intuitive or simple as one thinks.

Cybersecurity Intelligence and Defense Strategies

“Information on its own may be of utility to the commander, but when related to other information about the operational environment and considered in the light of past experience, it gives rise to a new understanding of the information, which may be termed ‘intelligence.’”[i]

The Cybersecurity Intelligence and Defense Strategies (CIDS) pillar is based on the principles of the “Joint Intelligence” report of the 17-member defense and intelligence community. Cybersecurity intelligence is developed on four levels – strategic, operational, tactical and asymmetrical. Strategic intelligence should be developed for the board of directors, senior management and the risk governance committee. Operational intelligence should be designed to provide security professionals with an understanding of threats and vulnerabilities in the operational environment. Tactical intelligence must provide directional guidance for offensive and defensive security strategies. Asymmetrical intelligence strategies include monitoring the cyber black market and gathering intelligence from law enforcement and other sources where possible.

CIDS also acts as the laboratory for cybersecurity intelligence, responsible for leading the human and technology security practice through a data-dependent approach that provides rapid-response capabilities. Information gathering is the process of providing organizational leadership with context for improved decision-making on current and forward-looking objectives that are key to operational success or to avoiding operational failure. Converting information into intelligence requires an organization to develop formal processes, capabilities, analysis, monitoring and communication channels that enhance its ability to respond appropriately and in a timely manner. Intelligence gathering assumes that the organization has well-defined cybersecurity objectives, plans for executing them and the capability to respond to countermeasures (surprise) as well as expected outcomes.
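As a simple illustration of the four intelligence levels, the routing table below maps each level to an audience and a reporting cadence; the audiences and cadences are assumptions made for the example, not prescriptions from the framework.

```python
# Illustrative routing of intelligence products by level.
INTEL_LEVELS = {
    "strategic":    {"audience": "board and risk governance committee", "cadence": "quarterly"},
    "operational":  {"audience": "security leadership",                 "cadence": "weekly"},
    "tactical":     {"audience": "SOC and defense teams",               "cadence": "daily"},
    "asymmetrical": {"audience": "threat intelligence and law enforcement liaison", "cadence": "continuous"},
}

def route(level):
    meta = INTEL_LEVELS[level]
    return "{} intelligence -> {} ({})".format(level, meta["audience"], meta["cadence"])

print(route("tactical"))
```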

Legal “Best Efforts” Considerations in Cyberspace

To say that the legal community is struggling with how to address cyber risks is an understatement: on the one hand, firms must protect their own clients’ data; on the other, they must determine negligence in a global environment where no organization can guard against a data breach with 100% certainty. “The ABA Cybersecurity Legal Task Force, chaired by Judy Miller and Harvey Rishikof, is hard at work on the Cyber and Data Security Handbook. The Cyber Incident Response Handbook, which originated with the Task Force.”[i] Law firms have the same challenges as all other organizations but are also held to a higher standard by ethical rules that require confidentiality of attorney-client and work product data. I looked to the guidance provided by the ABA to frame the fifth pillar of the CRFC.

The concept of “best efforts” is a contractual term used to obligate the parties to make their best attempt to accomplish a goal, typically used when there is uncertainty about the ability to meet it. “Courts have not required that a party under a duty to use best efforts to accomplish a given goal make every available effort to do so, regardless of the harm to it. Some courts have held that the appropriate standard is one of good faith. Black’s Law Dictionary 701 (7th ed. 1999) has defined good faith as ‘a state of mind consisting in (1) honesty in belief or purpose, (2) faithfulness to one’s duty or obligation, (3) observance of reasonable commercial standards of fair dealing in a given trade or business, or (4) absence of intent to defraud or to seek unconscionable advantage.’”[ii]

Boards of directors and senior executives are held to these standards by contractual agreement, whether they are aware of them or not, in the event a breach occurs. The ABA has adopted a security program guide developed by Carnegie Mellon University’s Software Engineering Institute. The Carnegie Mellon Enterprise Security Program (ESP) has been tailored for law firms as a prescriptive set of security-related activities as well as incident response and ethical considerations. The Carnegie Mellon ESP spells out that “some basic activities must be undertaken to establish a security program, no matter which best practice a firm decides to follow. (Note that they are all harmonized and can be adjusted for small firms.) Technical staff will manage most of these activities, but firm partners and staff need to provide critical input. Firm management must define security roles and responsibilities, develop top-level policies and exercise oversight. This means reviewing findings from critical activities; receiving regular reports on intrusions, system usage and compliance with policies and procedures; and reviewing the security plans and budget.”

This information is not legal guidance for complying with an organization’s best-efforts requirements. It is provided to raise awareness of the importance of board and senior management participation in ensuring all bases are covered in cyber risk. The CRFC’s fifth pillar completes the framework as a link to existing information security standards with an enhanced approach that includes cognitive science.

A cognitive risk framework for cybersecurity represents an opportunity to accelerate advances in cybersecurity and enterprise risk management simultaneously. The convergence of technology, data science, behavioral research and computing power is no longer wishful thinking about the future. The future is here, but in order to fully harness the power of these technologies and their potential benefits, IT security professionals and risk managers in general need a guidepost for comprehensive change. The cognitive risk framework for cybersecurity is the first of many advances that will change how organizations manage risk, now and in the future, in fundamental and profound ways few have dared to imagine.

[i] http://www.americanbar.org/publications/law_practice_magazine/2013/july-august/cybersecurity-law-firms.html

[ii] http://definitions.uslegal.com/b/best-efforts/

[i] http://fas.org/irp/doddir/dod/jp2_0.pdf

[i] https://securityintelligence.com/cognitive-security-helps-beat-the-bad-guys-at-unprecedented-scale-and-speed/

[ii] http://www.ucalgary.ca/icic/files/icic/2-IJCINI-4101-CI&CC.pdf

[i] https://en.wikipedia.org/wiki/Homo_economicus

2016-01-30 by: James Bone Categories: Risk Management Simplicity Reframed and the Domain of the Fragile

Simplicity may conjure up thoughts of inner peace and contemplative musings about self-actualization, but that is not the kind of simplicity I am referring to. The concept of simplicity that I think is really interesting is the challenge of making the complex simple. I am referring to the kind of simplicity that Steve Jobs imagined when he changed how we use technology.

Jobs redesigned how our brains interact with technology without our realizing we were participating in a brain hack! The ecosystem Apple created with the Mac and mobile devices via the Apple Store is a stroke of genius and an answer to an interesting problem. How to make technology so simple everyone on the planet can use it right out of the box?

“The reason that Apple is able to create products like the iPad is because we’ve always tried to be at the intersection of technology and the liberal arts. To be able to get the best of both. To make extremely advanced products from a technology point of view, but also have them be intuitive easy-to-use, fun-to-use, so that they really fit the users. The users don’t have to come to them, they come to the user. And it’s the combination of these two things that I think has let us make the kind of creative products like the iPad,” quote from Steve Jobs 2010’s introduction of the iPad.

Supposedly, the idea of a smart phone had been discussed long before Apple created the first iPhone but no one was able to put all the pieces together in the way Jobs did. The lesson from Apple’s success should not be that simplicity is too hard to conceive; instead, we must reframe simplicity as the end goal.

By reframing simplicity as the end goal, Jobs was able to see how multiple devices, such as the Walkman (remember those?), cameras and phones, could be integrated seamlessly into one device. Jobs showed how a focus on solving the problems behind poor customer experience helps create higher profits, customer loyalty and shareholder value; not the other way around. More important than the technology, Jobs chose not to control how the devices were used, which harnessed yet another ecosystem of developers and innovators who shared in Apple’s ascent to become the most profitable company on the planet. In other words, simplification led to organic iterations of new services, spawning global demand for all things Apple.

This raises very interesting questions about how we deal with risks or solve complex problems that appear to be intractable. If complexity is a product of our own design, what can we learn from Apple’s lessons in simplicity? You might be surprised that simplicity is a topic being studied and tested in real-world scenarios.

About five years ago, a new website called the Simplicity Index was created by Siegel+Gale, a global brand strategy, design and experience firm, to understand the role simplicity plays in brand awareness and loyalty. The Simplicity Index measures how customers perceive the simplicity of a company’s products and services: easy to understand; transparent and honest; making customers feel valued; innovative and fresh; and useful to customers. These simplicity attributes lead to measurable benefits in higher profitability, customer loyalty and premium pricing because of the perceived value.

Simplicity can be quantified and measured in real returns to organizations!

The power of simplicity is much bigger than a product strategy! Consider how risk management could be transformed if internal controls and compliance were redesigned to make it simple for employees to get their work done or follow the rules? Simplicity requires that we ask non-intuitive questions such as why must we continue to operate the way we do or what barriers to simplicity exist for customers and employees? Is it time to reconsider how the attributes from the Simplicity Index serve as the end game, not a mission statement with no real strategy of execution?

While you ponder those questions we should also ask why complexity is the norm and not simplicity. There are no simple answers but there are examples from network engineering of how overly complex network security design leads to vulnerabilities in cybersecurity.

The concept of Robust Yet Fragile

Engineers of computer networks are well versed in how complexity builds as well-meaning security professionals add controls and policies in response to threats and weaknesses without considering the impact to network fragility over time.

John Doyle, the John G Braun Professor of Control and Dynamical Systems, Electrical Engineering, and BioEngineering at the California Institute of Technology, introduced the “Robust Yet Fragile” (RYF) paradigm to explain the five components of network design used to build a robust system.

Each design component is built on the concept of adding robustness to networks to handle today’s evolving business needs: “Reliability is robustness to component failures. Efficiency is robustness to resource scarcity. Scalability is robustness to changes in the size and complexity of the system as a whole. Modularity is robustness to structured component rearrangements. Evolvability is robustness of lineages to changes on long timescales.”

There is an optimal point of robust network design. “Like all systems of equilibrium, the point at which robust network design leads to unnecessary complexity is the paradox faced by security professionals and systems architects. Systems, such as the Internet, are robust for a single point of failure yet fragile to a targeted attack. As networks bolt on more stuff to build scale the weight of all that stuff becomes more risky,” according to Doyle.
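For readers who want to see the robust-yet-fragile asymmetry directly, the sketch below (using the networkx library, with arbitrary graph size and removal counts) compares how much of a hub-heavy network stays connected after random component failures versus a targeted attack on its most connected nodes.

```python
# Illustrative robust-yet-fragile demo: a scale-free network shrugs off random
# failures but fragments when its hubs are attacked.
import random
import networkx as nx

def largest_component_fraction(graph):
    if graph.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(graph), key=len)) / graph.number_of_nodes()

random.seed(1)
g = nx.barabasi_albert_graph(n=500, m=2, seed=1)   # hub-heavy topology, loosely internet-like

random_failures = g.copy()
random_failures.remove_nodes_from(random.sample(list(g.nodes), 25))   # 25 random failures

targeted_attack = g.copy()
hubs = [node for node, _ in sorted(g.degree, key=lambda d: d[1], reverse=True)[:25]]
targeted_attack.remove_nodes_from(hubs)                               # 25 most connected nodes

print("after random failures:", round(largest_component_fraction(random_failures), 2))
print("after targeted attack:", round(largest_component_fraction(targeted_attack), 2))
```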

Doyle’s warnings about internet security also apply to enterprise risk management. Does anyone really believe that every employee understands how to operationalize all of the myriad policies and procedures put into effect each year? If so, you may be operating in the Domain of the Fragile and not aware of the vulnerabilities lurking around the corner.

How does an organization reframe simplicity?

The answer to that question differs by industry and organizational culture. A better way to answer it is to pose new questions for you to consider in your organization. For example, has the cost of critical operational functions increased at a higher rate than the benefits? How difficult is it for management to get timely answers about customer profitability, enterprise risk or financial performance? Are you losing customers because you are difficult to do business with? Are your employees empowered to solve risks on their own or given the tools to improve the customer experience? As you can see, the list of possibilities is endless; however, if you are not aware of the answers to these questions, you are operating in the Domain of the Fragile.

Board governance is one place where the example of simplicity can be modeled from the top down. Directors have an opportunity to reframe success and reduce risk with a focus on simplicity. Simplicity is not just a focus on Less but a renewed focus on Better. Simplicity is not about doing “more” with “less”; it’s about doing less to achieve more!

As you consider new strategies for 2016 and beyond, how you reframe simplicity may be the difference between success and failure for years to come.