The five pillars of the cognitive risk framework (CRF) are designed to provide a 3D view of enterprise risks. James Bone details here additional levers of risk governance in the final three pillars of the CRF.
In earlier installments, James discussed the first pillar of the Cognitive Risk Framework (CRF), cognitive governance; the five principles undergirding cognitive governance; and the second pillar, intentional control design.
Intentional design, the second pillar, represents a range of solutions designed to manage risks large and small. It begins with a clear set of strategic objectives, leverages empirical, risk-based data and then clarifies optimal outcomes. Simplicity is the guiding principle of intentional design. Intentional design applies cognitive governance’s five principles, using risk-based data to understand how poor workplace design contributes to inefficiencies and hinders employee performance. It makes the case for design as a risk lever that creates situational awareness and incorporates resilient operational excellence into risk management practice. It is an outcome of the analysis performed in cognitive governance.
The third pillar, intelligence and active defense, focuses on proactive risk management enabled by the first two pillars. The first pillar, cognitive governance, drives solution templates for sustainable outcomes using a multidisciplinary approach to addressing risk. The next three pillars are additional levers of risk governance.
Since the early stages of adopting enterprise risk management approaches to information security, Chief Information Security Officers (CISOs) have recognized the importance of data. However, data alone is not intelligence against an adversary who understands that user behavior is the critical path to achieving its objectives.
CISOs wear multiple hats when multitasking may not be the right approach. One of the many challenges in cyber risk is understanding how to wade through voluminous data to defend against adversaries who operate in stealth mode with weapons that evolve at digital speed. CISOs need help, not a new job title.
Advanced technology provides security professionals with the analytical firepower to analyze data across a variety of threat vectors. But CISOs understand that technology alone only adds scale; it does not by itself deliver a toolset robust enough to address the full spectrum of evolving threats. Unfortunately, despite billions spent on cybersecurity, cyber theft continues to outpace security defenses – the cyber paradox continues!
Intentional design, the second pillar, is a lever to facilitate the third pillar, intelligence and active defense strategies. Designing the right solution set for cybersecurity must also address impacts on IT, organizational and leadership resources. The prime target of attack is the human asset as the weakest link in security. CISOs need help separating the myth of assurance from the reality that exists in their network. Internal and external intelligence about the true nature and changing behavior of threat actors is critical for gaining real insights.
A core use case in cybersecurity targets the insider threat, even though the biggest cause of data breaches is human error. IT security needs empirical data that it can rely on instead of conventional wisdom.
The elephant in the room continues to be a lack of credible data.
IT professionals need a variety of analytical methods to better understand what is actually possible with technology and how to support and enable human assets to recognize and address threats more efficiently.
Threat actors understand the dilemma CISOs face, and they have learned to exploit it to their advantage. This is why CISOs and Chief Risk Officers must collaborate to design not only cyber solutions that deal with poor work processes, but also continuous intelligence to monitor evolving threats without piling on inefficient security processes. Active defense has been developed as a proactive security response to threat actors. More advanced approaches will emerge from what is learned by using similar and even more effective tactics.
CISOs have been slow to adopt active defense, yet these proactive approaches are becoming more common. Active defense is not hacking the hackers; it is the practice of designing traps, like honeypots, that allow IT professionals to gain intelligence on threat actors and mitigate damage in the event of a breach.
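A honeypot’s value lies in the intelligence it produces, not in retaliation. As an illustrative sketch only (the event format, addresses and threshold here are hypothetical, not taken from this article), the connection logs from decoy services can be distilled into a watchlist of likely scanners:

```python
from collections import defaultdict

def build_watchlist(events, min_ports=2):
    """Aggregate honeypot connection events into a watchlist.

    `events` is an iterable of (source_ip, decoy_port) pairs logged by
    decoy services. A source that probes several distinct decoy ports
    behaves like a scanner, so it is flagged for closer monitoring.
    """
    ports_by_ip = defaultdict(set)
    for ip, port in events:
        ports_by_ip[ip].add(port)
    # Keep only sources that touched at least `min_ports` decoys.
    return {ip: sorted(ports)
            for ip, ports in ports_by_ip.items()
            if len(ports) >= min_ports}

if __name__ == "__main__":
    log = [("203.0.113.7", 22), ("203.0.113.7", 3389),
           ("198.51.100.4", 22)]
    print(build_watchlist(log))  # {'203.0.113.7': [22, 3389]}
```

The point of the sketch is the workflow: decoys produce low-noise telemetry (any contact with a decoy is suspect), which feeds the continuous intelligence loop described above.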
Cyber risk is one of the most complicated risks our nation faces due to the asymmetric nature of the actors involved. Cyber risk is a profitable enterprise on the Dark Web, with everyone from contractors-for-hire to designers of advanced threats at the top of the food chain. The market for the most effective tools to achieve criminal objectives ensures that each version is enhanced to gain market share. One-dimensional approaches will continue to leave organizations vulnerable to these threats. An open internet and the reverse engineering of customer-facing technology ensure the cyber arms race on the Dark Web will only accelerate. Forward-looking security officers seek solutions that address the complexity of the problem with more comprehensive approaches.
Extensive work done by security researchers has demonstrated that many of the attacks are the result of simple vulnerabilities that have been exploited by attacking human behavior. Simple attacks do not imply a lack of sophistication; hackers use deception in an attempt to cover their tracks and obscure attribution. As organizations push forward with new digital business models, a more thoughtful approach is needed to understand security at the intersection of technology and humans in a networked environment with no boundaries.
Pillar 3: Intelligence and Active Defense
This summer I attended a cybersecurity conference and sat in on a demonstration of social engineering. The demonstration included a soundproof booth with a hacker calling a variety of organizations using different personas. The “targets” included every level of the organization; the task was completing a checklist of items that would be used to initiate an attack.
The audience was enthralled at the ease with which many of the “targets” unwittingly agreed to assist the contestants in the demonstration. Collectively, these approaches are called cognitive hacks: attacks that rely on changing users’ perceptions and behaviors to achieve their objectives.
The demonstration showed how hackers conduct human reconnaissance before an attack is launched. In some of the demonstrations, the “target” was asked to click on links; many complied, enabling the hacker to gain valuable information about the training, defensive strategies and other information tailored to enable an attack on the firm. Prior to the event, contestants developed a dossier on the firms through publicly available information. A few of the “hackers” were not completely successful, but in less than one hour, several contestants showed the audience the simplicity of the approach and the inherent vulnerability of defenses in real time. This is one of many techniques used to get around the myriad cyber defenses at sophisticated organizations.
Deception on the internet is the most effective attack vector, and the target is primarily the human actor.
Pillar three, intelligence and active defense, focuses on the “soft periphery” of human factors. These factors center on the human interaction with technology. As demonstrated above, technology is not needed beyond a phone call. No organization is immune to a data breach when cybersecurity is focused on either detection or prevention using technology.
Defensive and detection strategies, or a combination of approaches, must include the human element. A fourth approach is available that includes a focus on hardening human assets across the firm. Attackers need only to be successful once, while defenders must be successful 100 percent of the time; thus, the asymmetry of the risk.
How does an organization harden the soft periphery of human factors beyond training and awareness? The first step is to recognize there is more to learn about how to address human factors; technology firms have only begun to explore behavioral analytics solutions using narrow models of behavior. The second is a risk-based analysis of the gaps in security created by human behavior inside and outside the organization.
Business is conducted in a boundaryless environment that leaves a digital trail of forensic behavior that can be weaponized by adroit criminal actors. Defining critical behavioral threats requires early-stage intelligence; therefore, CISOs must consider how behavior creates fragility in security, then use risk-based approaches to mitigate it. Each organization will exhibit different behavioral traits that may lead to vulnerabilities and must be better understood.
As organizations rush to adopt digital strategies, the links between customer and business partner data may unwittingly create fragility in security at the enterprise. Organizations with robust internal security may be surprised to learn how fragile their security profile is when viewed across relationships.
IT professionals need intelligence to assess their robust yet fragile security posture. The Internet of Things (IoT), cloud platforms and third-party providers create fragility in security defenses that leaves organizations exposed. Organizational culture is also a driver of these behavioral threats, including decision-making under uncertainty.
“A great civilization is not conquered from without until it has destroyed itself from within.”
— Ariel Durant
The third pillar proposes the following proactive approaches as additional levers:
- Active defense
- Cyber and human intelligence analysis
- IT security/compliance automation
- Enhanced cyber hygiene – internal and external human actors
- Cultural behavioral assessment and decision analysis
To leave room to cover the remaining pillars, I will not explore these five levers at length; however, I have provided examples in the supporting references for readers to explore on their own. My goal here is to suggest an approach that takes the human element into account in a more comprehensive way. As each pillar is implemented, a three-dimensional picture of risk becomes clear. Intelligence is a design element that adds clarity to the picture.
By way of example, the organizations that fared better than others during the live hackathon were the ones whose employees practiced healthy skepticism and insisted that the caller validate their information with an email, a callback number and the name of a supervisor in the firm. When the “targets” were persistent in these requests for validating information, the contestants were stopped.
Pillar four is the next lever that deepens insight into enhanced risk governance.
Pillar 4: Cognitive Security and the Human Element
The future of risk governance and decision support will increasingly include the implementation of intelligent informatics. Intelligent informatics is an emerging multidisciplinary approach with real-world solutions in medicine, science, government agencies, technology and industry. These smart systems are being designed to combine computation, cognition and communications to derive new insights to solve complex problems.
We are in the early stage of development of these systems, which include the following burgeoning fields of research:
- machine learning and soft computing;
- data mining and big-data analytics;
- computer vision and pattern recognition; and
- automated reasoning.
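To make the machine learning item above concrete at its simplest, consider baselining one behavioral metric per user and flagging outliers for analyst review. This is a hedged sketch only: real behavioral analytics products use far richer models, and the metric, values and threshold here are invented for illustration.

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations that deviate sharply from a behavioral baseline.

    `baseline` holds historical values of one metric for one user,
    e.g. megabytes downloaded per day. Observations more than
    `threshold` standard deviations from the mean are returned for
    analyst review rather than acted on automatically.
    """
    mean = statistics.mean(baseline)
    spread = statistics.pstdev(baseline) or 1.0  # guard a flat baseline
    return [x for x in observed if abs(x - mean) / spread > threshold]

if __name__ == "__main__":
    history = [10, 12, 11, 9, 10, 11, 12, 10]  # normal daily volume
    today = [11, 300]                          # 300 MB is unusual here
    print(flag_anomalies(history, today))      # [300]
```

Even this toy version illustrates the design choice the text argues for: the system surfaces anomalous human behavior as intelligence, and a human decides what to do with it.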
The truth is, many of the functions in risk management, compliance, audit and IT security can and will be automated, providing organizations with real-time monitoring and analysis 24/7/365; humans, however, will still be needed to decide how to respond.
Advances in automation will both provide new strategic insights not possible with manual processes and free risk professionals to explore areas of the business that were previously inaccessible. This is not science fiction! Real-life examples exist today, including nurses using clinical decision support systems (CDSS) to improve patient outcomes. One innovation in risk management will be the advent of decision support systems across a range of solutions. These new technologies will allow risk professionals to design solutions that drive decision-making deep into the organization and provide senior management with actionable information about the health of the organization in near real time.
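At its core, a decision support layer of this kind rolls indicators up against limits set by management. The following is a minimal sketch under stated assumptions: the metric names, thresholds and traffic-light scheme are invented for illustration and are not prescribed by the framework.

```python
def risk_dashboard(indicators, limits):
    """Roll raw risk indicators up into statuses for decision-makers.

    `indicators` maps metric names to current values; `limits` maps the
    same names to (warn, breach) thresholds set by risk appetite. The
    overall rating is the worst individual status, so a single breached
    limit is enough to escalate.
    """
    rank = {"green": 0, "amber": 1, "red": 2}
    statuses = {}
    for name, value in indicators.items():
        warn, breach = limits[name]
        if value >= breach:
            statuses[name] = "red"
        elif value >= warn:
            statuses[name] = "amber"
        else:
            statuses[name] = "green"
    overall = max(statuses.values(), key=rank.get, default="green")
    return {"metrics": statuses, "overall": overall}
```

For example, `risk_dashboard({"unpatched_hosts": 12}, {"unpatched_hosts": (10, 25)})` reports an overall "amber" status, which is the kind of near-real-time signal the text envisions being pushed to every level of the organization.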
Risk management technology has rapidly evolved over the last 20 years, from compliance-based applications to integrated modules that automate oversight. GRC applications today will pale in comparison to intelligent informatics that will apply internal and external data pushed to decision-makers at every level of the organization. We are at a crossroads: Organizations are operating with one foot in the 19th century and the other foot racing toward new technology without a roadmap.
Technology solutions that do not improve decision-making at each level of the organization may hinder future growth by adding complexity. A strategic imperative for decision support will be driven by factors associated with cost, competition, product and increased regulatory mandates that challenge organizational resilience. Intelligent informatics will be one of many solutions enabling the levers of the human element.
The human element is the empowerment of every level of the organization by imparting situational awareness into performance, risks, efficiency and decision-making, combined with the ability to adjust and respond in a timely manner. Risk systems are getting smarter and faster, but the real power will only be realized by how well risk professionals learn to leverage these tools to design the solutions needed to help organizations achieve their strategic objectives.
Decision support and situational awareness is the final pillar of a cognitive risk framework. The end product of a cognitive risk framework is the creation of a robust decision support infrastructure that enables situational awareness. A true ERM framework is dynamic, continuously improving and strategic, adding value through actionable intelligence and the capability to respond to a host of threats.
Pillar 5: Decision Support and Situational Awareness
Organizations too often say “everyone owns risk,” but then fail to provide employees with the right tools to manage risks. Risk professionals will continue to be behind the curve without a comprehensive approach for thinking about how to create an infrastructure around decision support and situational awareness.
The five pillars of a cognitive risk framework are designed to provide a roadmap to becoming a resilient organization. Resiliency will be defined differently by each organization, but the goals inherent in a cognitive risk framework lead to enhanced risk awareness and performance and provide every level of the organization with the right tools to manage its risks within the parameters defined by cognitive governance.
Nimbleness is often cited as an aspirational attribute of a resilient organization; a nimble organization increasingly resembles a technology platform with operational modules designed to scale as needs change. Nineteenth-century organizations are more rigid by design, which reduces their responsiveness to change in comparison to a virtual economy, in which change only requires a few keystrokes. The retail apocalypse is just one of many examples of the transformations to come. A smooth transition to the fourth industrial revolution may depend largely on a digital transformation of the back office and operations.
This dilemma reminds me of a Buddhist saying: “If you meet the Buddha on the road, kill him” – a saying that suggests we need to be able to destroy our most cherished beliefs. We can grow only if we are able to reassess our belief system. To do this, we need to detach ourselves from our beliefs and examine them; if we are wrong, then we must have the mental strength to admit we are wrong, learn and move on.
Enterprise risk management has become the Buddha, and it is elusive even for the most sophisticated organizations.
A cognitive risk framework builds on traditional ERM approaches to put the human at the center of ERM with the tools to manage complex risks. Isn’t it time to infuse the human element in risk management?
Each of the five pillars has been presented at a minimal level of detail, but there is a tremendous amount of scientific research supporting each of the approaches to achieve a heightened level of maturity for risk governance. A cognitive risk framework for cybersecurity and ERM is the only risk framework based on Nobel-Prize-winning research, from Herbert Simon to Daniel Kahneman, along with Paul Slovic and today’s other contemporaries of modern risk thinking.
I have referred to cognitive hacks throughout the installments of a cognitive risk framework. Cognitive hacks – attacks that do not require a computer and whose chief objective is changing the behavior of the target to achieve the attacker’s goals – were first recognized by researchers at the Center for Cognitive Neuroscience at Dartmouth. Hackers have deployed variations of cognitive hacks, such as phishing, social engineering, deception, deepfakes and other methods, since the beginning of the internet. These attacks are growing in sophistication and deception, as demonstrated in the 2016 election and continuing into 2020.
Cognitive hacks are a global threat to growth in the fourth industrial revolution. The wholesale destruction of institutional norms of behavior and discourse on the internet is a symptom of the pervasive nature of these attacks. Criminals have weaponized stealth and trust on the internet through intelligence gained from our online behavior. A hidden market in our digital footprint is traded by legitimate and illegitimate players with little to no oversight. Government and business leaders have been caught flat-footed in this cyberwar; IT and risk professionals on the front line of defense lack the resources and tools to effectively prevent the spread of the threat.
Cognitive hacks prey on our subconscious, our biases and heuristics of decision-making. A cognitive risk framework was designed to bring awareness to this growing threat and create informed decision support and situational awareness to counter these threats.
A complete and more detailed version of the cognitive risk framework will be developed in 2020, and more advanced versions will be developed by others, including myself. I want to thank Corporate Compliance Insights for allowing me to introduce this executive summary.
James Bone explores cognitive governance, the first pillar of the cognitive risk framework, and the five principles that drive the framework to simplify risk governance, add new rigor to risk assessment and empower every level of the organization with situational awareness to manage risk with the right tools.
The three lines of defense (3LoD) – or, more broadly, risk governance – is being rethought on both sides of the Atlantic. A 3LoD model assigns three or more defensive lines of accountability to protect an organization, in the same vein as the Maginot Line was meant to defend France after Verdun. IT security also adopted layered security and controls, but it is now evolving to incorporate risk governance approaches. The Maginot Line was considered state of the art for defensive wars fought in trenches, yet it was vulnerable to an offensive change in enemy strategy. Inflexibility in the design and execution of risk practice is the Achilles’ heel of good risk governance. In order to build risk programs that are responsive to change, we must redesign the solutions we are seeking in risk governance.
A cognitive risk framework clarifies risk governance and provides a pathway for organizations to understand and address the risks that matter. There are many reasons 3LoD is perceived to fall short of expectations, but a prominent one is unresolved conflicts in perceptions of risk: the human element. Unresolved conflicts about risk undermine good risk governance, trust and communication.
In Risk Perceptions, Paul Slovic reflected on interpersonal conflicts: “Can an atmosphere of trust and mutual respect be created among opposing parties? How can we design an environment in which effective, multiway communication, constructive debate and compromise take place?”
A cognitive risk framework is designed to find simple solutions to risk management through a focus on empowering the human element. Please keep this perspective in mind as you digest the five principles of cognitive governance.
Principle #1: Risk Governance
Risk governance continues to be a concept that is hard to grasp and elusive to define in concrete terms. Attributes of risk governance such as corporate culture, risk appetite and strategy are assumed outcomes, but what are the right inputs to facilitate these behaviors? Good risk governance is sustainable through simplicity and design. In an attempt to simplify risk governance, two inputs are offered: discovery and mitigation.
Risk governance is presented here as two separate and distinct processes:
Risk Assessment (Discovery) and Risk Management (Mitigation)
Risk management is often conflated with risk assessment, but the skills, tools and responsibilities needed to adequately address these two processes require that they be separate and distinct functions within risk governance. This may appear counterintuitive at first glance, but too narrow a focus on either the mitigation of risk (management) or the discovery of risk (assessment) limits the full spectrum of opportunities to enhance risk governance.
Risk analysis is a continuous process of learning and discovery inclusive of quantitative and qualitative methods that reflect the complexity of risks facing all organizations. Risk analysis should be multidisciplinary in practice, borrowing from a variety of analytical methodologies. For this reason, a specialized team of diverse risk analysts might include data scientists, mathematicians, computer scientists (hackers), network engineers and architects, forensic accountants and other nontraditional disciplines alongside traditional risk professionals. The skill set mix is illustrative, but the design of the team should be driven by senior management to create situational awareness and the tools needed to analyze complex risks. More on this point in future installments.
This approach is not unique or radical. NASA routinely leverages different risk disciplines in preparation for space travel. Wall Street has assimilated physicists from the natural sciences with finance professionals, mathematicians and computer programmers to build risk solutions for their clients and to manage their own risk capital. Examples are plentiful in automotive design, aerospace and other high-risk industries. Success can be designed, but solving complex issues requires human input.
“Risk analysis is a political enterprise as well as a scientific one, and public perceptions of risk play an important role in risk analysis, adding issues of values, process, power and trust to the quantification issues typically considered by risk assessment professionals” (Slovic, 1999).
Separately, risk management is the responsibility of the board, senior management, audit and compliance. Risk management is an expression of risk appetite, which it is the purview of management to accept or reject. Senior executives are empowered by stakeholders inside and outside the firm to selectively choose the risks that optimize performance and avoid the risks that hinder it. Traditional risk managers are seldom empowered with these dual mandates, and I don’t suggest they should be.
In other words, risk management is the process of selecting among issues of value, power, process and trust in the validation of issues related to risk assessment. To actualize the benefits of sustainable risk governance, advanced risk practice must include expertise in discovery and mitigation. Organizations that develop deep knowledge in both disciplines and master conflicts in perceptions of risk will be better positioned for long-term success.
Experienced risk professionals understand that without the proper tone at the top, even the best risk management programs will fail. Tone at the top implies full engagement by senior executives in the risk management process as laid out in cognitive governance. Developing enhanced risk assessment processes builds confidence in risk-management decisions through greater rigor in risk analysis and recommendations to improve operational efficiency. Risk governance (Principle #1) transforms assurance through perpetual risk-learning.
Principle #2, perceptions of risk, provides an understanding of how to mitigate the conflicts that hurt cognitive governance.
Principle #2: Perceptions of Risk
Risk should be a topic upon which we all agree, but it has become a four-letter word with such divergent meanings that a Google search results in 232 million derivations! The mere mention of climate change, gun control or any number of social or political issues instantly creates a dividing line that is hard, if not impossible, to penetrate. Many of these conflicts are based on deeply held personal and political beliefs that are intractable even in the face of science, data or facts, so how does an organization find common ground?
In discussing this issue with a chief operations officer at a major international bank, I was told, “we thought we understood risk management until the bank almost failed in the 2008 Great Recession.” The truth is, most organizations are reluctant to speak honestly about risks until it is too late or only after a “near miss.” In other words, risk is an abstract concept until we experience it firsthand. As a result, each of us brings our own unique experience of risk into any discussion that involves the possibility of failure. These unresolved conflicts in perceptions of risk create friction in organizations, causing blind spots that expose firms to potential failures, large and small.
But why is perception of risk important?
Each of us brings a different set of personal values and perspectives to the topic of risk. This partly explains why salespeople view risks differently than, say, accountants; risk is personal and situational to the people and circumstances involved. The vast majority of these conflicting perceptions of risk are well-managed, but many are seldom fully resolved, leading to conflicts that impede performance.
Risk professionals must become attuned to and listen for these conflicts, because they represent signals about risk. Perceptions of risk represent how most people feel about a risk, inclusive of positive or negative outcomes from their own experience. Researchers view risk as probability analysis. Understanding and reconciling these conflicts in perceptions of “risk as feelings” and “risk as analysis” is a low-cost solution that releases the potential for greater performance. Yet the devil in the details can only be fully uncovered through a process of discovery.
Principle #1 (risk governance) acts as a vehicle for learning about risks that enlightens principle #2 (perceptions of risk). Even the most seasoned executive is prone to errors in judgment as complexity grows. However, communications about risk are challenging when we lack agreed-upon procedures to reconcile these conflicts.
Albert Einstein provided a simple explanation:
“Not everything that counts can be counted, and not everything that can be counted counts.”
He knew that telling the difference requires a process that creates an openness to learning.
Principle #1 (risk governance) formalizes continuous learning about risks in order to avoid analysis paralysis in decision-making. Risk governance focuses on building risk intelligence. Principle #2 (perceptions of risk) leverages risk intelligence to fill in the gaps data alone cannot.
Perceptions of risk are complex because they are seldom expressed verbally. In other words, how we act under pressure is more powerful than mission statements or even codes of ethics! We say we are safe drivers, but we still text and drive. People take shortcuts when their jobs become too complex, leading to risky behavior. Unknowingly, organizations incentivize the wrong behaviors by not fully considering the impacts on human factors.
Surprisingly, cognitive governance means fewer, simpler rules instead of more policies and procedures. Risk intelligence narrows the “boil the ocean” approach to risk governance. The vast majority of risk programs spend 85 to 95 percent of 3LoD resources on known risks, leaving the biggest potential exposure – uncertainty – unaddressed.
Again, risk governance is about learning what the organization really values and why.
Organizations must begin to re-design the inputs to risk governance. The common denominator in all organizations is the human element, yet its impact is discounted in risk governance.
Principle #3: Human Element Design
A Ph.D. computer scientist friend from Norway once told me that organizations have a natural rhythm, like a heartbeat, and that cyber criminals understand and leverage this to plan their attacks. Busy, distracted and stressed-out workers are generally more vulnerable to cyberattack. In poorly designed work environments, no amount of controls, training, punishment or incentives is effective at preventing phishing attacks or other social engineering schemes, and this applies to everyone from the C-suite to rank-and-file security professionals.
Cyber criminals understand the human element better than all risk professionals!
Human element design is an innovation in risk governance. Regulators have also begun to include behavioral factors, such as conduct risk, ethics and enhanced governance in regulation, but thus far, the focus is primarily on ensuring good customer outcomes. Sustainable risk governance must consider human factors a tool to increase productivity and reduce risk.
Human element design is evolving to address correlations and corrective actions in human factors and workplace errors, information security and operational risk. Principles #1 (risk governance) and #2 (perceptions of risk) assist principle #3 (human element design) in defining areas of opportunity to increase efficient operations and reduce risk in human factors.
Decades of research on human factors in the workplace have led to productivity gains and reductions in operational risk across many industries. We take for granted the declining injury rates in the auto and airline industries attributed to human factors design. Simple changes, such as seatbelts and navigation systems in cars and pilot-to-co-pilot communications during take-offs and landings, are at least as important as automation and big data projects.
So, why is it important to focus on the human element more broadly now?
The primary reason to focus on the human element now is because technology has become pervasive in everything we do today. Legacy systems, outsourcing, connected devices and networked applications increase complexity and potential risks in the workplace. The internet is built on an engineering concept that is both robust and fragile, meaning users have access to websites around the world, but that access is subject to failure at any connection. Digital transformation extends and expands these new points of fragility, obscuring risk in a cyber void. In the physical world, humans are more aware of risk exposures. In a digital environment, risks are hidden beneath complexity.
Technology has driven productivity gains and prosperity in emerging and developed economies, adding convenience to many parts of our lives; however, cyber risks expose inherent vulnerabilities in cobbled-together systems. Email, social media, third-party partners, mobile devices and now even money move at speeds that increase the possibility for error and reduce our ability to “see” risk exposures that manifest within and beyond our perceptions of risk.
Developers and users of technology must begin to understand how the design and implementation of digital transformation create risk exposures. A “rush to market” mindset has put security on the back burner, leaving users on their own to figure it out instead of making security a market differentiator. Technology developers must begin to collaborate on how security can be made more intuitive for users and tech support. Tech SROs (self-regulatory organizations) are needed to stay ahead of bad actors and government regulation. Users must also understand the limits of technology to solve challenges by building in accommodations for how people work together, share and complete specific tasks.
By fixating on narrow issues like the insider threat, which pale in comparison to the larger issue of the human element, we miss the forest for the trees. The first two principles are designed to support improvements in the human element, but a new risk practice must be developed with simplicity, security and efficient operations as the end products of risk governance.
I will address cognitive hacks separately; these are some of the most sophisticated threats in risk governance and require special treatment.
The human element principle is a focus on designing solutions that address cognitive load, build situational awareness and manage risks at the intersection of human-to-human and human-to-machine interactions. Apple, Amazon, Twitter and others have learned that simplicity works to promote human creativity for growth. Information security and risk governance must become intuitive and seamless to empower the human element.
This topic will be revisited in intentional design, the second pillar, but for now, suffice it to say that a focus on the human element will create a multiplier effect in productivity, growth and new products and services that do not exist today. Each of the five principles is a call to action to think more broadly about risks, today and in the future.
For now, let’s move on to principle #4, intelligence and modeling.
Principle #4: Intelligence & Modeling
“All models are wrong, but some are useful.”
– George Box, Statistician
Box’s warning referred to the inclination to present excessively elaborate models as more correct than simple models. In fact, the opposite is true: simple approximations of reality may be more useful (e.g., E = mc²). More importantly, Box further warned modelers to understand what is wrong in the model: “It is inappropriate to be concerned about mice when there are tigers abroad” (Box 1978). Expanding on Box’s sentiment, I would add that useful models are not static and may become less useful as circumstances change or new information is presented.
For example, risk matrices have become widely adopted in risk practice and, more recently, in cybersecurity. A risk matrix is a simple tool to rank risks when users do not have the skill or time to perform more in-depth statistical analysis. Unfortunately, risk matrices have been misused by GRC consultants and risk practitioners, creating a false sense of assurance among senior executives. Good risk governance demands more rigor than simple risk matrices.
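To see how little rigor a risk matrix actually provides, here is a minimal 5×5 likelihood-times-impact ranking sketched in Python (the scores, bands and risk names are hypothetical). Note how it collapses very different risks into a single coarse score, which is precisely why it should not be mistaken for statistical analysis.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Rank a risk on a 5x5 matrix; both inputs run 1 (low) to 5 (high)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact  # yields 1..25

def risk_band(score: int) -> str:
    """Map a raw score to the familiar color-band labels."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical register entries: (likelihood, impact)
risks = {
    "phishing campaign": (4, 3),
    "data-center outage": (2, 5),
    "vendor contract lapse": (1, 2),
}
ranked = sorted(risks.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (l, i) in ranked:
    s = risk_score(l, i)
    print(f"{name}: score={s} ({risk_band(s)})")
```

A "likely but modest" risk and a "rare but severe" one can land in the same band here, and the ordinal scales hide real probability and loss distributions, the very limitations the text warns about.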
First, I want to be clear that the business intelligence and data modeling principle is not proposed as a big data project. Big data projects have gotten a bad rap, with conflicting examples of hype about the benefits, as well as humbling outcomes as measured in project success. Principle #4 is about developing structured data governance in order to improve business intelligence for better performance.
Let me give you a simple example: In 2007, prior to the start of the Great Recession, mutual funds had used limited amounts of derivatives to manage risk and boost returns. Wall Street began to increase leverage using derivatives to gain advantage; however, firms relied on manual processes and were unable to easily quantify increased exposure to counterparty risk. A simple question like “what is my total exposure?” took weeks — if not months — to answer and did not include comprehensive answers about impacts to fund performance if specific risk scenarios occurred. We know what happened in 2008, and many of those risks materialized without the risk mitigation needed to offset downside exposure.
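The “what is my total exposure?” question reduces to a simple aggregation once position data is structured rather than trapped in spreadsheets and paper contracts. A minimal sketch, with entirely hypothetical positions and counterparty names:

```python
from collections import defaultdict

# Hypothetical derivative positions; in 2007 this data often lived in
# spreadsheets and paper contracts, which is why the question took weeks.
positions = [
    {"counterparty": "Bank A", "notional": 25_000_000, "collateral": 5_000_000},
    {"counterparty": "Bank B", "notional": 40_000_000, "collateral": 15_000_000},
    {"counterparty": "Bank A", "notional": 10_000_000, "collateral": 2_000_000},
]

def net_exposure_by_counterparty(positions):
    """Sum uncollateralized exposure per counterparty, largest first."""
    totals = defaultdict(int)
    for p in positions:
        totals[p["counterparty"]] += p["notional"] - p["collateral"]
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

exposure = net_exposure_by_counterparty(positions)
print(exposure)  # {'Bank A': 28000000, 'Bank B': 25000000}
```

The point is not the code, which is trivial, but the data governance behind it: the answer takes seconds only when positions, collateral and counterparties are captured in structured form.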
Without getting too wonky: manual operational processes for managing collateral, together with heavy use of spreadsheets and paper contracts, slowed firms’ ability to answer these questions and mitigate risk in a timely manner. Organizations need to understand the strategic questions that matter and create the ability to answer them in minutes, not months. Good risk governance proactively defines strategic questions and refines them as new information changes the firm’s risk profile.
Business intelligence and data modeling is an iterative process of experimentation to ask important strategic questions and learn what really matters. I separated the two skill sets because the disciplines are different and the capabilities are specific to each organization. The key point of the intelligence and modeling principle is to incorporate a commitment in risk governance to business intelligence and data modeling, along with the patience to develop the skills needed to support business strategy.
Principle #4 should be designed to better understand business performance, reduce inefficiencies, evaluate security and manage the risks critical to strategy. This is a good place to transition to principle #5, capital structure.
Principle #5: Capital Structure
A firm’s capital structure is one of the key building blocks of long-term success for any viable business, but too often, even well-established organizations stumble (and many fail) for reasons that seem inexplicable. The CFO is often elevated to assume the role of risk manager, and in many firms, staff responsible for risk management report to the CFO; however, upon further analysis, the tools used by CFOs may be too narrow to manage the myriad risks that lead to business failure.
Finance students are well-versed in weighted average cost of capital calculations to achieve the right debt-to-equity mix. Organizations have become adept at managing cash flows, sales, strategy and production during stable market conditions. But how do we explain why so many firms appear to be caught flat-footed during rapid economic change and market disruption? Why is Amazon frequently blamed for causing a “retail apocalypse” in several industries? The true cause may be a pattern of inattentional blindness.
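The weighted average cost of capital calculation referenced above can be sketched in a few lines as a reminder of how narrow the tool is; all figures are hypothetical.

```python
# WACC = (E/V) * r_e + (D/V) * r_d * (1 - tax), with V = E + D.

def wacc(equity: float, debt: float, cost_equity: float,
         cost_debt: float, tax_rate: float) -> float:
    """After-tax weighted average cost of capital."""
    value = equity + debt
    return ((equity / value) * cost_equity
            + (debt / value) * cost_debt * (1 - tax_rate))

# A firm with $600M equity at a 10% cost, $400M debt at 5% pre-tax, 25% tax:
rate = wacc(600e6, 400e6, 0.10, 0.05, 0.25)
print(f"WACC = {rate:.2%}")  # WACC = 7.50%
```

A correct WACC, of course, says nothing about the blind spots discussed next; that is the pillar’s point.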
Inattentional blindness is when an individual [or organization] fails to perceive an unexpected stimulus in plain sight. When it becomes impossible to attend to all the stimuli in a given situation, a temporary “blindness” effect can occur, as individuals fail to see unexpected (but often salient) objects or stimuli. In the Harvard Business Review article “Why Good Companies Go Bad,” Donald Sull, Senior Lecturer at the MIT Sloan School, and author Kathleen M. Eisenhardt explain that active inertia is an organization’s tendency to follow established patterns of behavior — even in response to dramatic environmental shifts.
Success reinforces patterns of behavior that become intractable until disruption in the market. According to Sull,
“Organizations get stuck in the modes of thinking that brought success in the past. As market leaders, management simply accelerates all their tried-and-true activities. In trying to dig themselves out of a hole, they just deepen it.”
This may explain why firms spiral into failure, but it doesn’t explain why organizations miss the emergence of competitors or a change in the market in the first place.
Inattentional blindness occurs when firms ignore or fail to develop formal processes that proactively monitor market dynamics for threats to their leadership. Sull and Eisenhardt’s analysis is partially correct in that when firms react, the response is typically half-baked, resulting in damage to capital — or worse, a race to the bottom.
Interestingly, Sull also suggests that an organization’s inability to change extends to legacy relationships with customers, vendors, employees, suppliers and others, creating “shackles” that reinforce the inability to change. Contractual agreements memorialize these relationships and financial obligations, but are rarely revisited after the deals have been completed. Contracts are risk-transfer tools, but indemnification language may be subject to different state laws. How many firms truly understand the risk exposure and financial obligations in legacy contractual agreements? How many firms understand the root cause of financial leakage in contractual language?
Insurance companies are scrambling to mitigate cyber insurance accumulation risks embedded in legacy indemnification agreements. These hidden risks manifest because organizations lack formal processes to adequately assess legacy obligations, creating inattentional blindness to novel risks. Digital transformation will only accelerate accumulation risks in digital assets.
To summarize, the tools to manage capital do not stop with managing the cost of capital, cash flows and financial obligations. Capital can be put at risk by unanticipated blind spots in which risks and uncertainty are viewed too narrowly.
The first pillar, cognitive governance, is the driver of the next four pillars. The five pillars of a cognitive risk framework represent a new maturity level in enterprise risk management, one I propose in order to broaden the view of risk governance and build resilience to evolving threats. More advanced cognitive risk frameworks will no doubt be developed by others (including myself) over time.
The treatment of the remaining four pillars will be shorter and focused on mitigating the issues and risks described in cognitive governance. Intentional design is the next pillar to be introduced.
Introducing the Human Element to Risk Management
As posted in Corporate Compliance Insights
As we move into the 4th Industrial Revolution (4IR), risk management is poised to undergo a significant shift. James Bone asks whether traditional risk management is keeping pace. (Hint: it’s not.) What’s really needed is a new approach to thinking about risks.
Framing the Problem
Generally speaking, organizations have one foot firmly planted in the 19th century and the other foot racing toward the future. The World Economic Forum calls this time in history the 4th Industrial Revolution, a $100 trillion opportunity that represents the next generation of connected devices and autonomous systems needed to fuel a new leg of growth. Every revolution creates disruption, and this one will be no exception, including how risks are managed.
The digital transformation underway is rewriting the rules of engagement. The adoption of digital strategies implies disaggregation of business processes to third-party providers, vendors and data aggregators who collectively increase organizational exposure to potential failure in security and business continuity. Reliance on third parties and sub-vendors extends the distance between customers and service providers, creating a “boundaryless” security environment. Traditional concepts of resiliency are challenged when what is considered a perimeter is as fluid as the disparate service providers cobbled together to serve different purposes. A single service provider may be robust in isolation, but may become fragile during a crisis in connected networks.
Digital transformation is, by design, the act of breaking down boundaries in order to reduce the “friction” of doing business. Automation is enabling speed, efficiency and multilayered products and services, all driven by higher computing power at lower prices. Digital unicorns, evolving as 10- to 20-year “overnight success stories,” give the impression of endless opportunity, and capital returns from early-stage tech firms continue to drive rapid expansion in diverse digital strategies.
Thus far, these risks have been fairly well-managed, with notable exceptions.
Given this rapid change, it is reasonable to ask if risk management is keeping pace as well. A simple case study may clarify the point and raise new questions.
In 2016, the U.S. presidential election ushered in a new risk: a massive cognitive hack. Researchers at Dartmouth College’s Thayer School of Engineering developed the theory of cognitive hacking in 2003, although the technique has been around since the beginning of the internet.
Cognitive hacks are designed to change the behavior and perception of the target of the attack. The use of a computer is optional in a cognitive hack. These hacks have been called phishing or social engineering attacks, but these terms don’t fully explain the diversity of methods involved. Cognitive hacks are cheap, effective and used by nation states and amateurs alike. Generally speaking, “deception” – in defense or offense – on the internet is the least expensive and most effective approach to bypass or enhance security, because humans are the softest target.
In “Cognitive Hack,” a chapter entitled “How to Hack an Election” describes how cognitive hacks have been used to great effect in political campaigns around the world. It is not surprising that the technique eventually made its way into American politics. The key point is that deception is a real risk that is growing in sophistication and effectiveness.
In researching why information security risks continue to escalate, it became clear that a new framework for assessing risks in a digital environment required a radically new approach to thinking about risks. The escalation of cyber threats against an onslaught of security spending and resources is called the “cyber paradox.” We now know the root cause is the human-machine interaction, but sustainable solutions have been evasive.
Here is what we know: [digital] risks thrive in diverse human behavior!
Some behaviors are predictable but evolve over time. Security methods that focus on behavioral analytics and defense have found success, but are too reactive to provide assurance. One interesting finding noted that a focus on simplicity and good working relationships plays a more effective role than technology solutions. A 2019 study of cyber resilience found that “infrastructure complexity was a net contributor to risks, while the human elements of role alignment, collaboration, problem resolution and mature leadership played key roles in building cyber resilience.”
In studying the phenomena of how the human element contributes to risk, it became clear that risk professionals in the physical sciences were using these same insights of human behavior and cognition to mitigate risks to personal safety and enable better human performance.
Diverse industries, such as air travel, automotive, health care and tech, have benefited from human element design to improve safety and create sustainable business models. However, the crime-as-a-service (CaaS) model may be the best example of how organized criminals in the dark web work together with the best architects of CaaS products and services, making billions selling to a growing market of buyers.
The International Telecommunications Union (ITU), in publishing its second Global Cybersecurity Index (GCI), noted that approximately 38 percent of countries have a cybersecurity strategy and another 12 percent are considering one.
The agency said more effort is needed in this critical area, particularly since a national strategy signals that governments consider digital risks a high priority. “Cybersecurity is an ecosystem where laws, organizations, skills, cooperation and technical implementation need to be in harmony to be most effective,” stated the report, adding that cybersecurity is “becoming more and more relevant in the minds of countries’ decision-makers.”
Ironically, social networks in the dark web have proven to be more robust than billions in technology spending.
The formation of systemic risks in a broader digital economy will be defined by how well security professionals bridge 19th-century vulnerabilities with next-century business models. Automation will enable the transition, but human behavior will determine the success or failure of the 4th Industrial Revolution.
A broader set of solutions is beyond the scope of this paper, but it will take a coordinated approach to make real progress.
The common denominator in all organizations is the human element, but we lack a formal approach to assess the transition from 19th-century practices to this new digital environment. Not surprisingly, I am neither the first nor the last to consider the human element in cybersecurity, but I am convinced that the solutions are not purely prescriptive in nature, given the complexity of human behavior.
The assumption seems to be that humans will simply adapt, as they have so often in the past. Digital transformation will require a more thoughtful and nuanced approach to the human-machine interaction in a boundaryless security environment.
Cognitive hackers from the CIA, NSA and FBI agree that addressing the human element is the most effective approach. A cognitive risk framework is designed to address the human element and enterprise risk management in broader ways than changing employee behavior. A cognitive risk framework is a fundamental shift in thinking about risk management and risk assessment and is ideally suited for the digital economy.
Technology is creating a profound change in how business is conducted. The fragility in these new relationships is concentrated at the human-machine interaction. Email is just one of dozens of iterations of vulnerable endpoints inside and outside of organizations. Advanced analytics will play a critical role in security, but organizational situational awareness will require broader insights.
Recent examples include the 2016 distributed denial of service (DDoS) attack on Dyn, an internet infrastructure company that provides domain name service (DNS) to its customers. A single service provider created unanticipated systemic risks across the East Coast.
DNS translates the domain name you type into your browser into the IP address of the server that hosts the site. A DDoS attack on a DNS provider therefore blocks access to every website that relies on it. Much of the East Coast was in a panic as the attack slowly spread. This is what happened to Amazon AWS, Twitter, Spotify, GitHub, Etsy, Vox, PayPal, Starbucks, Airbnb, Netflix and Reddit.
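The Dyn failure mode can be illustrated with a toy simulation (all names and addresses are hypothetical): the customer sites remain healthy, but once their shared DNS provider goes down, name resolution, and with it access, fails for every one of them.

```python
class DNSProvider:
    """Toy stand-in for a managed DNS service such as the one Dyn provided."""
    def __init__(self, records: dict):
        self.records = records  # hostname -> IP address
        self.up = True

    def resolve(self, hostname: str) -> str:
        if not self.up:
            # The site itself may be fine, but nobody can find it.
            raise LookupError(f"DNS resolution failed for {hostname}")
        return self.records[hostname]

provider = DNSProvider({
    "example-shop.com": "93.184.216.34",
    "example-news.com": "93.184.216.35",
})

print(provider.resolve("example-shop.com"))  # works while the provider is up

provider.up = False  # simulate the DDoS taking the provider offline
try:
    provider.resolve("example-news.com")
except LookupError as err:
    print(err)  # every customer of this single provider is now unreachable
```

The concentration risk is the lesson: one shared dependency turned many independently robust services into a single point of failure.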
These risks are known, but they require complex arrangements that take time. These visible examples of bottlenecks in the network offer opportunity to reduce fragility in the internet; however, resilience on the internet will require trusted partnerships to build robust networks beyond individual relationships.
The collaborative development of the internet is the best example of complete autonomy, robustness and fragility. The 4th Industrial Revolution will require cooperation on security, risk mitigation and shared utilities that benefit the next leg of infrastructure.
Unfortunately, systemic risks are already forming that may threaten free trade in technology as nations begin to plan for and impose restrictions to internet access. A recent Bloomberg article lays bare the global divisions forming regionally as countries rethink an open internet amid political and security concerns.
So, why do we need a cognitive risk framework?
Cognitive risk management is a multidisciplinary focus on human behavior and the factors that enhance or distract from good outcomes. Existing risk frameworks tend to consider the downside of human behavior, but human behavior is not one-dimensional, and neither are the solutions. Paradoxically, cybercriminals are expert at exploiting trust in a digital environment and use a variety of methods [cognitive hacks] to change behavior in order to circumvent information security controls.
A simple answer to why is that cognitive risks are pervasive in all organizations, but too often are ignored until too late or not understood in the context of organizational performance. Cognitive risks are diverse and range from a toxic work environment, workplace bias and decision bias to strategic and organizational failure. More recent research is starting to paint a more vivid picture of the role of human error in the workplace, but much of this research is largely ignored in existing risk practice. A cognitive risk framework is needed to address the most challenging risks we face … the human mind!
A cognitive risk framework works just like digital transformation: by breaking down the organizational boundaries that prevent optimal performance and risk reduction.
Redesigning Risk Management for the 4th Industrial Revolution!
The Cognitive Risk Framework for Cybersecurity and Enterprise Risk Management is a first attempt at developing a fluid set of pillars and practices to complement COSO ERM, ISO 31000, NIST and other risk frameworks with the human at the center. Each of the Five Pillars will be explored as a new model for resilience in the era of digital transformation.
It is time to humanize risk management!
A cognitive risk framework has five pillars. Subsequent articles will break down each of the five pillars to demonstrate how each pillar supports the other as the organization develops a more resilient approach to risk management.
The Five Pillars of a Cognitive Risk Framework include:
I. Cognitive Governance
II. Intentional Design
III. Risk Intelligence & Active Defense
IV. Cognitive Security/Human Elements
V. Decision Support (situational awareness)
Lastly, as part of the roll out of a cognitive risk framework, I am conducting research at Columbia University’s School of Professional Studies to better understand advances in risk practice beyond existing risk frameworks. My goal, with your help, is to better understand how risk management practice is evolving across as many risk disciplines as possible. Participants in the survey will be given free access to the final report. An executive summary will be published with the findings. Contact me at firstname.lastname@example.org. Emails will be used only for the purpose of distributing the survey and its findings.
*Correction: The reference to Level 3 Communication experiencing a cyberattack was reported incorrectly. The reference to Level 3 is related to a 2013 outage due to a “failing fiber optic switch” not a cyberattack. Apologies for the incorrect attribution. The purpose of the reference is related to systemic risks in the Internet. James Bone
The GRC Marketplace is expanding globally
The global market for risk technology has rapidly evolved over the last 20 years from single solution providers into platforms with cloud features and advanced analytics. The term “GRC” (governance, risk & compliance) has also undergone a metamorphosis in attempts to describe aspirational solutions that have yet to fully live up to the goals of GRC users.
Terms such as enterprise risk management, integrated risk management, RegTech, InsureTech and even FinTech are used interchangeably in a confusing alphabet soup of marketing jargon that fails to provide information about the tools themselves.
Given this change, TheGRCBlueBook has sponsored a survey to assess how well GRC solutions are meeting the expectations of the market. The attached survey has some positive and surprising findings, as well as opportunities for improvement for both GRC solution providers and GRC users.
I hope you find this report interesting and would appreciate any feedback or comments you would like to offer.
James Bone, Executive Director
… Plus 6 Steps to Enhanced Assurance
The audit profession is facing unprecedented demands, but there are a host of tools available to help. James Bone outlines the benefits to automating audit tasks.
Internal audit is under increasing pressure across many quarters from challenges to audit objectivity, ethical behavior and requests to reduce or modify audit findings. “More than half of North American Chief Audit Executives (CAEs) said they had been directed to omit or modify an important audit finding at least once, and 49 percent said they had been directed not to perform audit work in high-risk areas.” That’s according to a report by The Institute of Internal Auditors (IIA) Research Foundation, based on a survey of 494 CAEs and some follow-up interviews.
Challenges to audit findings are a normal part of the process of clarifying risks associated with weaknesses in internal controls and gaps that expose the organization to threats. However, reducing subjectivity and improving audit consistency is critical to minimizing second-guessing and enhancing credibility. One way to improve audit consistency and objectivity is to reframe the business case for audit automation.
Audit automation gives audit professionals the tools to spend less time on low-risk, high-frequency areas while still detecting changes within them, monitoring the velocity of high-frequency risks that may lead to increased exposures or the development of new risks.
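The velocity monitoring described above can be sketched as a simple baseline test; the weekly exception counts and the two-standard-deviation threshold below are hypothetical illustrations, not a prescribed methodology.

```python
from statistics import mean, stdev

def velocity_alert(counts, z_threshold=2.0):
    """True if the latest count exceeds the baseline mean + z * stdev.

    `counts` is a chronological series of, say, weekly control-exception
    counts; everything but the last observation forms the baseline.
    """
    baseline, latest = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return latest > mu + z_threshold * sigma

# A low-risk metric that suddenly accelerates in week 8:
weekly_exceptions = [12, 14, 11, 13, 12, 15, 13, 29]
print(velocity_alert(weekly_exceptions))  # True — worth an auditor's look
```

A continuously monitored threshold like this flags acceleration in routine risks without consuming audit hours, which is the business case the text is making.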
More importantly, challenges to audit findings associated with low-frequency, high-impact risks (less common) typically deal with areas of uncertainty that are harder to justify without objective data. Uncertainty, or “unknown unknowns,” is the hardest class of risk to justify using a subjective point-in-time audit methodology. Uncertainty, by definition, requires statistical and predictive methods that give auditors an understanding of the distribution of probabilities, as well as the correlations and degrees of confidence associated with a risk. Probability management provides auditors with next-level capabilities to discuss risks that are elusive to nail down. Automation provides internal auditors with the tools to frame the discussion about uncertainty more clearly and to understand the context in which these events become more prevalent.
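The distribution-of-probabilities view described above can be sketched with a tiny Monte Carlo simulation: rather than a single point estimate, simulate an annual loss distribution and read off percentiles. The frequency and severity parameters here are purely illustrative assumptions, not a recommended model.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_annual_loss(trials: int = 10_000):
    """Monte Carlo sketch: monthly event chance, lognormal-ish severity."""
    losses = []
    for _ in range(trials):
        # Hypothetical frequency: each month has a 15% chance of a loss event.
        events = sum(1 for _ in range(12) if random.random() < 0.15)
        # Hypothetical severity: heavy-tailed, so rare years dominate.
        losses.append(sum(random.lognormvariate(10, 1.2) for _ in range(events)))
    return sorted(losses)

losses = simulate_annual_loss()
n = len(losses)
print(f"median annual loss : {losses[n // 2]:,.0f}")
print(f"95th percentile    : {losses[int(n * 0.95)]:,.0f}")
```

Reporting “the 95th percentile of simulated losses” gives the auditor a defensible, repeatable statement about uncertainty, in contrast to a point-in-time judgment.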
Risk communication is one of the biggest challenges for all oversight professionals. According to an article in Harvard Business Review,
“We tend to be overconfident about the accuracy of our forecasts and risk assessments and far too narrow in our assessment of the range of outcomes that may occur. Organizational biases also inhibit our ability to discuss risk and failure. In particular, teams facing uncertain conditions often engage in groupthink: Once a course of action has gathered support within a group, those not yet on board tend to suppress their objections — however valid — and fall in line.”
Everyone in the organization has a slightly different perception of risk that is influenced by heuristics developed over a lifetime of experience. Heuristics are mental shortcuts individuals use to make decisions. Most of the time, our heuristics work just fine with the familiar problems we face. Unfortunately, we do not recognize when our biases mislead us in judging more complex risks. In some cases, what appears to be lapses in ethical behavior may simply be normal human bias, which may lead to different perceptions of risk. How does internal audit overcome these challenges?
The Opportunity Cost of Not Automating
Technology is not a solution in and of itself; it is an enabler that makes staff more effective when integrated strategically to complement their strengths and address areas for improvement. Automation creates situational awareness of risks, and technology solutions that improve situational awareness in audit assurance should be the end goal. Situational awareness (SA) in audit is not a one-size-fits-all proposition: in some organizations, SA involves improved data analysis; in others, it may include a range of continuous monitoring and reporting in near real time. Situational awareness reduces human error by making sense of the environment with objective data.
A growing body of research demonstrates that human error is the biggest cause of risk in a wide range of organizations, from IT security to health care and organizational performance. Automation now makes it possible to reduce human error and improve insights into operational performance. Chief Audit Officers have the opportunity to lead, in collaboration with operations, finance, compliance and risk management, on automation that supports each of the key stakeholders who provide assurance.
Collaboration on automation reduces redundancies for data requests, risk assessments, compliance reviews and demands on IT departments. Smart automation integrates oversight into operations, reduces human error, improves internal controls and creates situational awareness where risks need to be managed. These are the opportunity costs of not automating.
A Pathway to Enhanced Assurance
Audit automation has become a diverse set of solutions offered by a range of providers, but that alone should not drive the decision to automate. Developing a coherent strategy for automation is the key first step. Whether you are a Chief Audit Officer just beginning to consider automation or your team is well-versed in automation platforms, it may be time to rethink audit automation not as a one-off budget item, but as a strategic imperative integrated into operations and focused on what the board and senior executives consider important. This requires the organization to see audit as integral to operational excellence and business intelligence. Reframing the role of audit through automation is the first step toward enhanced assurance.
Auditors are taught to be skeptical while conducting attestation engagements; however, there is no statistical definition of assurance. Assurance requires subjective judgments in the risk assessment process, which can lead to variability in audit quality across auditors within the same audit function. According to ISACA’s IS Audit and Assurance Guideline 2202 Risk Assessment in Planning, Risk Assessment Methodology 2.2.4, “all risk assessment methodologies rely on subjective judgments at some point in the process (e.g., for assigning weights to the various parameters). Professionals should identify the subjective decisions required to use a particular methodology and consider whether these judgments can be made and validated to an appropriate level of accuracy.” Too often, these judgments are difficult to validate with a repeatable level of accuracy without quantifiable data and methodology.
Scientific methods are the only proven way to develop degrees of confidence in risk assessment and to establish correlations between cause and effect. “In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.” Automating sampling is the most practical way to reduce that risk, because it makes larger and more frequent samples affordable. Trending sample data also helps auditors detect seasonality and other factors that arise from the ebb and flow of business dynamics.
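The effect of sample size on sampling error can be made concrete with a back-of-the-envelope calculation. The sketch below uses a generic normal-approximation confidence interval (an illustrative method, not one prescribed by ISACA) to show why automated sampling, which makes large samples cheap, supports more defensible audit conclusions:

```python
import math

def error_rate_confidence_interval(errors, sample_size, z=1.96):
    """Approximate 95% confidence interval (normal approximation)
    for the population error rate implied by an audit sample."""
    p = errors / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)  # standard error shrinks as n grows
    return max(0.0, p - z * se), min(1.0, p + z * se)

# The same observed 4% error rate supports a much tighter conclusion
# when automation makes a large sample cheap to draw.
print(error_rate_confidence_interval(2, 50))     # small manual sample: wide interval
print(error_rate_confidence_interval(80, 2000))  # automated sample: narrow interval
```

The point of the sketch is the comparison, not the formula: the observed rate is identical in both cases, but only the larger, automated sample lets the auditor validate the judgment "to an appropriate level of accuracy."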
- Identify the greatest opportunities to automate routine audit processes.
- Prioritize automation projects each budget cycle in coordination with operations, risk management, IT and compliance as applicable.
- Prioritize projects built on data sources shared by multiple stakeholders (e.g., operational data used across functions); one-offs can be integrated over time as needed.
- Develop a secondary list of automation projects that allow for monitoring, business intelligence and confidentiality.
- Design automation projects with levels of security that maintain the integrity of the data based on users and sensitivity of the data.
- Consider the questions most important to senior executives.
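The prioritization steps above can be sketched as a simple weighted-scoring exercise. The criteria, weights and candidate projects below are illustrative assumptions for the sketch, not a prescribed methodology:

```python
# Illustrative weighted scoring of candidate automation projects.
# Criteria and weights are assumptions, chosen to mirror the steps above.
WEIGHTS = {
    "shared_data_sources": 0.4,  # favors data used by multiple stakeholders
    "stakeholders_served": 0.3,  # operations, risk management, IT, compliance
    "exec_priority": 0.3,        # questions most important to senior executives
}

def score(project):
    """Weighted sum of 1-5 criterion ratings."""
    return sum(WEIGHTS[k] * project[k] for k in WEIGHTS)

candidates = [
    {"name": "Continuous control monitoring",
     "shared_data_sources": 5, "stakeholders_served": 4, "exec_priority": 5},
    {"name": "One-off spreadsheet tie-out",
     "shared_data_sources": 1, "stakeholders_served": 1, "exec_priority": 2},
]

for p in sorted(candidates, key=score, reverse=True):
    print(f"{p['name']}: {score(p):.1f}")
```

A scoring table like this is deliberately crude; its value is forcing the conversation with operations, IT and compliance about which criteria and weights actually reflect what the board cares about.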
“Look, I have got a rule,” General Powell said. “As an intelligence officer, your responsibility is to tell me what you know. Tell me what you don’t know. Then you’re allowed to tell me what you think. But you [should] always keep those three separated.”
– Tim Weiner, reporting in The New York Times on wisdom former Director of National Intelligence Mike McConnell learned from General Colin Powell
The business case for audit automation has never been stronger given the demands on internal audit. Today, the tools are available to reduce waste, improve assurance, validate audit findings and provide for enhanced audit judgment on the risks that really matter to management and audit professionals.
James Bone is the author of Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind (Taylor & Francis, 2017) and is a contributing author for Compliance Week, Corporate Compliance Insights, and Life Science Compliance Updates. James is a lecturer at Columbia University’s School of Professional Studies in the Enterprise Risk Management program and consults on ERM practice.
He is the founder and president of Global Compliance Associates, LLC and Executive Director of TheGRCBlueBook. James founded Global Compliance Associates, LLC to create the first cognitive risk management advisory practice. James graduated from Drury University with a B.A. in Business Administration, from Boston University with an M.A. in Management and from Harvard University with an M.A. in Business Management, Finance and Risk Management.
Christopher P. Skroupa: What is the thesis of your book Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind and how does it fit in with recent events in cyber security?
James Bone: Cognitive Hack follows two rising narrative arcs in cyber warfare: the rise of the “hacker” as an industry and the “cyber paradox,” namely why billions spent on cyber security fail to make us safe. The backstory of the two narratives reveals a number of contradictions about cyber security, as well as how surprisingly simple it is for hackers to bypass defenses. The cyber battleground has shifted from an attack on hard assets to a much softer target: the human mind. If human behavior is the new and last “weakest link” in the cyber security armor, is it possible to build cognitive defenses at the intersection of human-machine interactions? The answer is yes, but the change that is needed requires a new way of thinking about security, data governance and strategy. The two arcs meet at the crossroads of data intelligence, deception and a reframing of security around cognitive strategies.
The purpose of Cognitive Hack is to look not only at the digital footprint left behind from cyber threats, but to go further—behind the scenes, so to speak—to understand the events leading up to the breach. Stories, like data, may not be exhaustive, but they do help to paint in the details left out. The challenge is finding new information buried just below the surface that might reveal a fresh perspective. The book explores recent events taken from today’s headlines to serve as the basis for providing context and insight into these two questions.
Skroupa: IoT has been highly scrutinized as having the potential to both increase technological efficiency and broaden our cyber vulnerabilities. Do you believe the risks outweigh the rewards? Why?
Bone: The recent Internet outage in October of this year is a perfect example of the power and stealth of IoT attacks. What many are not aware of is that hackers have been experimenting with IoT attacks in increasingly complex and potentially damaging ways. The TOR Network, used on the Dark Web to provide legitimate and illegitimate users anonymity, was almost taken down by an IoT attack. Security researchers have warned of other examples of connected smart devices being used to launch DDoS attacks that have not garnered media attention. As smart devices spread, the threat only grows. The anonymous attacker in October is said to have used only 100,000 devices. Imagine what could be done with one billion devices as manufacturers globally export them, creating a new network of insecure connections with little to no security in place to detect, correct or prevent attacks launched from anywhere in the world.
The question of weighing the risks versus the rewards is an appropriate one. Consider this: The federal government has standards for regulating the food we eat, the drugs we take, the cars we drive and a host of other consumer goods and services, but the single most important tool the world increasingly depends on has no gatekeeper to ensure that the products and services connected to the Internet don’t endanger national security or pose a risk to their users. At a minimum, manufacturers of IoT devices must put measures in place to detect these threats, disable devices once an attack starts and communicate the risks of IoT more transparently. Lastly, the legal community has not kept pace with the development of IoT; however, this is an area that will be ripe for class action lawsuits in the near future.
Skroupa: What emerging trends in cyber security can we anticipate from the increasing commonality of IoT?
Bone: Cyber crime has grown into a thriving black market complete with active buyers and sellers, independent contractors and major players who, collectively, have developed a mature economy of products, services, and shared skills, creating a dynamic laboratory of increasingly powerful cyber tools unimaginable before now. On the other side, cyber defense strategies have not kept pace even as costs continue to skyrocket amid asymmetric and opportunistic attacks. However, a few silver linings are starting to emerge around a cross-disciplinary science called Cognitive Security (CogSec), Intelligence and Security Informatics (ISI) programs, Deception Defense, and a framework of Cognitive Risk Management for cyber security.
On the other hand, the job description of “hacker” is evolving rapidly with some wearing “white hats,” some with “black hats” and still others with “grey hats.” Countries around the world are developing cyber talent with complex skills to build or break security defenses using easily shared custom tools.
The rise of the hacker as a community and an industry will have long-term ramifications for our economy and national security that deserve more attention; otherwise, the unintended consequences could be significant. In the same light, the book looks at the opportunity and challenge of building trust into networked systems. Building trust in networks is not a new concept, but it is too often a secondary or tertiary consideration as systems designers rush products and services to market to capture share, leaving security considerations to corporate buyers. IoT is a great example of this challenge.
Skroupa: Could you briefly describe the new Cognitive Risk Framework you’ve proposed in your book as a cyber security strategy?
Bone: First of all, this is the first cognitive risk framework of its kind designed for enterprise risk management. The Cognitive Risk Framework for Cybersecurity (CRFC) is an overarching risk framework that integrates technology and behavioral science to create novel approaches to internal controls design that act as countermeasures, lowering the risk of cognitive hacks. The framework targets cognitive hacks as a primary attack vector because of their high success rate and their overall volume relative to more conventional threats. The CRFC is a fundamental redesign of enterprise risk management and internal controls design for cybersecurity, but it is equally relevant for managing risks of any kind.
The concepts referenced in the CRFC are drawn from a large body of multidisciplinary research. Cognitive risk management is a sister discipline of a parallel body of science called cognitive informatics security, or CogSec. It is also important to point out that, as the creator of the CRFC, the principles and practices prescribed herein are borrowed from cognitive informatics security, machine learning, artificial intelligence (AI), and behavioral and cognitive science, among other fields that are still evolving. The Cognitive Risk Framework for Cybersecurity revolves around five pillars: Intentional Controls Design; Cognitive Informatics Security; Cognitive Risk Governance; Cybersecurity Intelligence and Active Defense Strategies; and Legal “Best Efforts” Considerations in Cyberspace.
Many organizations are doing some aspect of a “cogrisk” program but haven’t formulated a complete framework; others have not even considered the possibility; and still others are on the path toward a functioning framework influenced by management. The Cognitive Risk Framework for Cybersecurity is a response to an interim period of transition to a new level of business operations (cognitive computing), informed by better intelligence to solve the problems that hinder growth.
Christopher P. Skroupa is the founder and CEO of Skytop Strategies, a global organizer of conferences.
When we think of hacking, we think of a network being hacked remotely by a computer nerd sitting in a bedroom, using code she’s written to steal personal data or money, or just to see if it is possible. The image of a character breaking network security to take control of law enforcement systems has been imprinted on our psyche by TV crime shows; however, the real story is more complex in origin and simpler in execution.
The idea behind a cognitive hack is simple. A cognitive hack is the use of a computer or information system [social media, etc.] to launch a different kind of attack: one that depends entirely on its ability to “change human users’ perceptions and corresponding behaviors in order to be successful.” Robert Mueller’s indictment of 13 Russian operatives is a cognitive hack taken to the extreme, but it demonstrates the effectiveness and subtlety of an attack of this nature.
Mueller’s indictment of an elaborately organized and surprisingly low-cost “troll farm,” set up to launch an “information warfare” operation against U.S. political elections from Russian soil using social media platforms, is extraordinary and dangerous. The danger of these attacks is only now becoming clear, but it is also important to understand the simplicity of a cognitive hack. To be clear, the Russian attack is extraordinary in scope, purpose and effectiveness; however, attacks like it happen every day for much more mundane purposes.
Most of us know these attacks as email phishing campaigns designed to lure you into clicking an unsuspecting link that gives access to your data. Russia’s attack is simply a more elaborate and audacious version designed to influence what we think and how we vote, and to foment dissent between political parties and the citizenry of a country. That is what makes Mueller’s detailed indictment even more shocking. Consider, for example, how TV commercials, advertisers and, yes, politicians have been very effective at using “sound bites” to simplify their product story and appeal to certain target markets. The art of persuasion is a simple way to explain a cognitive hack: an attack focused on the subconscious.
It is instructive to look at the Russian attack rationally from its [Russia’s] perspective in order to objectively consider how this threat can be deployed on a global scale. Instead of spending billions of dollars in a military arms race, countries can arm themselves with the ability to influence another country’s citizens for a few million dollars through information warfare alone. A new, more advanced cadre of computer scientists is being groomed to build security for, and defend against, these sophisticated attacks. This is simply an old trick disguised in 21st-century technology through the use of the internet.
A new playbook has been refined for hacking political campaigns and used effectively around the world, as documented in a March 2016 article. For more than 10 years, elections in Latin America have served as a testing ground for how to hack an election. The drama in the U.S. reads like one episode of a long-running soap opera, complete with “hackers for hire,” “middlemen,” political conspiracy and sovereign interference.
“Only amateurs attack machines; professionals target people.” – Bruce Schneier
Now that we know the rules have changed, what can be done about this form of cyberattack? Academics, government researchers and law enforcement have studied this problem for decades, but the general public is largely unaware of how pervasive the risk is and the threat it poses to our society and the next generation of internet users.
I wrote a book, Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, to chronicle this risk and propose a cognitive risk framework that brings awareness to the problem. Much more is needed to raise awareness among organizations, government officials and risk professionals around the world. A new cognitive risk framework is needed to better understand these threats, identify and assess new variants of attack and develop contingencies rapidly.
Social media has unwittingly become a platform of choice for nation-state hackers, who can easily hide the identity of the organizations and resources involved in these attacks. Social media platforms are largely unregulated and therefore are not required to verify the identity and source of funding behind these kinds of operations. This may change given the stakes involved.
Just as banks and other financial services firms are required to identify new account owners and their sources of funding, so should technology providers, because social media sites may also be used as venues for raising and laundering illicit funds to carry out fraud or attacks on a sovereign state. We now have explicit evidence of the threat this poses to emerging and mature democracies alike.
Regulation alone is not enough to address an attack this complex, and existing training programs have proven ineffective. Traditional risk frameworks and security measures are not designed to deal with attacks of this nature. Fortunately, a handful of information security professionals are now considering how to implement new approaches to mitigate the risk of cognitive hacks. The National Institute of Standards and Technology (NIST) is also working on an expansive new training program for information security specialists, specifically designed to address the human element of security; yet the public is largely on its own. The knowledge gap is huge, and the general public needs more than an easy-to-remember slogan.
A national debate among industry leaders is needed to tackle security. Silicon Valley and the tech industry, writ large, must also step up and play a leadership role in combatting these attacks by forming self-regulatory consortiums to reduce vulnerabilities in new technology launches and to develop more secure networking systems. The cost of cyber risk is growing far faster than inflation and will eventually become a drag on corporate earnings and national growth rates. Businesses must look beyond the “insider threat” model of security risk and reconsider how the work environment contributes to exposure to cyberattacks.
Cognitive risks require a new mental model for understanding “trust” on the internet. Organizations must begin to develop new trust measures for doing business over the internet and with business partners. The idea of security must also be expanded to include more advanced risk assessment methodologies along with a redesign of the human-computer interaction to mitigate cognitive hacks.
Cognitive hacks are asymmetric in nature, meaning that the downside of these attacks can significantly outweigh the benefits of risk-taking if they are not addressed in a timely manner. Because of this asymmetry, attackers seek the easiest route of access. Email is one example of a low-cost and very effective attack vector, one that leverages the digital footprint we leave on the internet.
Imagine a sandy beach where you leave footprints as you walk but instead of the tide erasing your footprints they remain forever present with bits of data about you all along the way. Web accounts, free Wi-Fi networks, mobile phone apps, shopping websites, etc. create a digital profile that may be more public than you realize. Now consider how your employee’s behavior on the internet during work connects back to this digital footprint and you are starting to get an idea of how simple it is for hackers to breach a network.
A cognitive risk framework begins with an assessment of Risk Perceptions related to cyber risks at different levels of the firm. The risk perceptions assessment creates a Cognitive Map of the organization’s cyber awareness. This is called Cognitive Governance and is the first of five pillars used to manage asymmetric risks. The other four pillars are driven by the findings in the cognitive map.
A cognitive map uncovers the blind spots we all experience when a situation at work or on the internet exceeds our experience of how to deal with it successfully. Hackers exploit these natural blind spots to deceive us into changing our behavior: clicking a link, a video or a promotional ad, or even altering what we read. Trust, deception and blind spots are just a few of the concepts we must incorporate into a new toolkit called the cognitive risk framework.
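As a toy illustration of that first step (the respondent levels, risk names and 1-5 scale below are invented for the sketch, not taken from the framework), aggregating risk-perception survey scores by level of the firm shows where perception gaps, and therefore potential blind spots, sit:

```python
# Toy aggregation of risk-perception survey scores (1 = low concern, 5 = high).
# Levels, risks and scores are hypothetical, for illustration only.
responses = [
    ("executive", "phishing", 2), ("executive", "phishing", 3),
    ("front_line", "phishing", 5), ("front_line", "phishing", 4),
    ("executive", "iot", 4), ("front_line", "iot", 2),
]

def cognitive_map(responses):
    """Average perceived risk per (level, risk) pair."""
    totals = {}
    for level, risk, score in responses:
        totals.setdefault((level, risk), []).append(score)
    return {key: sum(scores) / len(scores) for key, scores in totals.items()}

cmap = cognitive_map(responses)
# A large gap between levels on the same risk flags a potential blind spot
# worth investigating before designing controls.
gap = abs(cmap[("executive", "phishing")] - cmap[("front_line", "phishing")])
print(cmap, gap)
```

The real assessment is of course richer than an average score, but even this crude map makes the point: the finding that drives the remaining pillars is the disagreement between levels, not any single number.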
There is little doubt that Mueller’s investigation into the sources and methods the Russians used to influence the 2016 election will reveal more surprises, but one thing is no longer in doubt: the Russians have a new cognitive weapon that is deniable but still traceable, for now. They are learning from Mueller’s findings and will get better.