Tag Archives: Human-Centered Design

2019-11-22 by: James Bone Categories: Risk Management The 3 Final Pillars of the Cognitive Risk Framework Understanding the Elements of the CRF for Cybersecurity and ERM

The five pillars of the cognitive risk framework (CRF) are designed to provide a 3D view of enterprise risks. James Bone details here additional levers of risk governance in the final three pillars of the CRF.

In earlier installments, James discussed the first pillar of the Cognitive Risk Framework (CRF), cognitive governance; the five principles undergirding cognitive governance; and the second pillar, intentional control design.

Intentional design, the second pillar, represents a range of solutions designed to manage risks both large and small. It begins with a clear set of strategic objectives, leverages empirical, risk-based data and then clarifies optimal outcomes. Simplicity is the guiding principle of intentional design. Intentional design builds on cognitive governance’s five principles by applying risk-based data to understand how poor workplace design contributes to inefficiencies and hinders employee performance. It also makes the case for design as a risk lever that creates situational awareness and incorporates resilient operational excellence into risk management practice. It is an outcome of the analysis performed in cognitive governance.

The third pillar, intelligence and active defense, focuses on proactive risk management enabled by the first two pillars. The first pillar, cognitive governance, drives solution templates for sustainable outcomes using a multidisciplinary approach. The final three pillars are additional levers of risk governance.

Introduction

Chief Information Security Officers (CISOs) recognized the importance of data early in the adoption of enterprise risk management approaches to information security. However, data alone is not intelligence against an adversary who understands that user behavior is the critical path to achieving its objectives.

CISOs wear multiple hats when multitasking may not be the right approach. One of the many challenges in cyber risk is understanding how to wade through voluminous data to defend against adversaries who operate in stealth mode with weapons that evolve at digital speed. CISOs need help, not a new job title.

Advanced technology provides security professionals with the analytical firepower to analyze data across a variety of threat vectors. CISOs understand that technology provides scale, but scale alone does not deliver a toolset robust enough to address the full spectrum of evolving threats. Unfortunately, despite billions spent on cybersecurity, cyber theft continues to grow and outpace security defenses – the cyber paradox continues!

Intentional design, the second pillar, is a lever to facilitate the third pillar, intelligence and active defense strategies. Designing the right solution set for cybersecurity must also address impacts on IT, organizational and leadership resources. The prime target of attack is the human asset as the weakest link in security. CISOs need help separating the myth of assurance from the reality that exists in their network. Internal and external intelligence about the true nature and changing behavior of threat actors is critical for gaining real insights.

A core use case in cybersecurity targets the insider threat, while the biggest cause of data breaches is human error. IT security needs empirical data that it can rely on instead of conventional wisdom.

The elephant in the room continues to be a lack of credible data.

IT professionals need a variety of analytical methods to better understand what is actually possible with technology and how to support and enable human assets to recognize and address threats more efficiently.

Threat actors understand the dilemma CISOs face, and they have learned to exploit it to their advantage. This is why CISOs and Chief Risk Officers must collaborate to design not only cyber solutions that deal with poor work processes, but also continuous intelligence that monitors evolving threats without piling on inefficient security processes. Active defense has been developed as a proactive security response to threat actors. Advanced approaches will emerge from what is learned by using similar and even more effective tactics.

CISOs have slowly adopted active defense, yet these proactive approaches are becoming more common. Active defense is not hacking the hackers; it is the practice of designing traps, like honeypots, that allow IT professionals to gain intelligence on threat actors and mitigate damage in the event of a breach.
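To make the honeypot idea above a bit more concrete, here is a minimal sketch (my own illustration, not part of the framework) of a low-interaction honeypot: a listener on an otherwise unused port that records every connection attempt for later analysis. The port number, banner and log file are assumptions chosen for illustration; a production deployment would forward events to a SIEM and isolate the trap from real assets.

```python
# Minimal low-interaction honeypot sketch (illustrative assumptions throughout).
# It listens on an otherwise unused port and records every connection attempt
# so analysts can gather intelligence on who is probing the network.
import socket
import datetime

LISTEN_PORT = 2222          # assumption: a port no legitimate service uses
LOG_FILE = "honeypot.log"   # assumption: flat file; a real deployment would feed a SIEM

def run_honeypot() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        while True:
            conn, (ip, port) = server.accept()
            with conn:
                # Record the source of the probe for later analysis.
                entry = f"{datetime.datetime.utcnow().isoformat()} probe from {ip}:{port}\n"
                with open(LOG_FILE, "a") as log:
                    log.write(entry)
                # Send a plausible banner to keep the attacker engaged a moment longer.
                conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")

if __name__ == "__main__":
    run_honeypot()
```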

Cyber risk is one of the most complicated risks our nation faces due to the asymmetric nature of the actors involved. Cyber risk is a profitable enterprise in the Dark Web, with everyone from contractors-for-hire to designers of advanced threats at the top of the food chain. The market for the most effective tools to achieve criminal objectives ensures that each version is enhanced to gain market share. One-dimensional approaches will continue to leave organizations vulnerable to these threats. An open internet and reverse engineering of customer-facing tech ensure the cyber arms race will only accelerate in the Dark Web. Forward-looking security officers seek solutions that address the complexity of the problem with more comprehensive approaches.

Extensive work done by security researchers has demonstrated that many of the attacks are the result of simple vulnerabilities that have been exploited by attacking human behavior. Simple attacks do not imply a lack of sophistication; hackers use deception in an attempt to cover their tracks and obscure attribution. As organizations push forward with new digital business models, a more thoughtful approach is needed to understand security at the intersection of technology and humans in a networked environment with no boundaries.

Pillar 3: Intelligence and Active Defense

This summer I attended a cybersecurity conference and sat in on a demonstration of social engineering. The demonstration included a soundproof booth with a hacker calling a variety of organizations using different personas. The “targets” included every level of the organization; the task was completing a checklist of items that would be used to initiate an attack.

The audience was enthralled at the ease with which many of the “targets” unwittingly agreed to assist contestants in the demonstration. Collectively, these approaches are called cognitive hacks: attacks that rely on changing users’ perceptions and behaviors to achieve their objectives.

The demonstration showed how hackers conduct human reconnaissance before an attack is launched. In some of the demonstrations, the “target” was asked to click on links; many complied, enabling the hacker to gain valuable information about the training, defensive strategies and other information tailored to enable an attack on the firm. Prior to the event, contestants developed a dossier on the firms through publicly available information. A few of the “hackers” were not completely successful, but in less than one hour, several contestants showed the audience the simplicity of the approach and the inherent vulnerability of defenses in real time. This is one of many techniques used to get around the myriad cyber defenses at sophisticated organizations.

Deception on the internet is the most effective attack vector, and the target is primarily the human actor.

Pillar three, intelligence and active defense, focuses on the “soft periphery” of human factors. These factors center on the human interaction with technology. As demonstrated above, technology is not needed beyond a phone call. No organization is immune to a data breach when cybersecurity is focused on either detection or prevention using technology.

Defensive and detection strategies, or a combination of approaches, must include the human element. A fourth approach is available that includes a focus on hardening human assets across the firm. Attackers need only to be successful once, while defenders must be successful 100 percent of the time; thus, the asymmetry of the risk.

How does an organization harden the soft periphery of human factors beyond training and awareness? The first step is to recognize there is more to learn about how to address human factors. Technology firms have only begun to explore behavioral analytics solutions using narrow models of behavior. Second, the soft periphery of human factors is a risk-based analysis of gaps in security created by human behavior inside and outside the organization.

Business is conducted in a boundaryless environment that includes a digital trail of forensic behavior that can be weaponized by adroit criminal actors. Defining critical behavioral threats requires early-stage intelligence; therefore, CISOs must consider how behavior creates fragility in security, then use risk-based approaches to mitigate it. Each organization will exhibit different behavioral traits that may lead to vulnerabilities, and these must be better understood.

As organizations rush to adopt digital strategies, the links between customer and business partner data may unwittingly create fragility in security at the enterprise. Organizations with robust internal security may be surprised to learn how fragile their security profile is when viewed across relationships.

IT professionals need intelligence to assess their robust, yet fragile security posture. Internet of things (IoT), cloud platforms and third-party providers create fragility in security defenses that leave organizations exposed. Organizational culture is also a driver of these behavioral threats, including decision-making under uncertainty.

“A great civilization is not conquered from without until it has destroyed itself from within.”
— Ariel Durant

The third pillar proposes the following proactive approaches as additional levers:

  • Active defense
  • Cyber and human intelligence analysis
  • IT security/compliance automation
  • Enhanced cyber hygiene – internal and external human actors
  • Cultural behavioral assessment and decision analysis

To leave room to cover the remaining pillars, I will not explore these five levers at length; however, I have provided examples in the supporting references for readers to explore on their own. My goal here is to suggest an approach that takes the human element into account in a more comprehensive way. As each pillar is implemented, a three-dimensional picture of risk becomes clear. Intelligence is a design element that adds clarity to the picture.
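To make one of these levers slightly more tangible, the sketch below illustrates what the IT security/compliance automation lever can look like at its simplest: a script that checks an asset inventory against a patch-age policy and flags exceptions for review. The inventory format, hostnames and 30-day threshold are assumptions invented for illustration, not prescriptions of the framework.

```python
# Minimal compliance-automation sketch (all names and thresholds are illustrative):
# flag assets whose last security patch is older than a policy limit.
from datetime import date

PATCH_AGE_LIMIT_DAYS = 30  # assumed policy threshold

# Assumed inventory format: (hostname, date of last applied security patch)
inventory = [
    ("web-01", date(2019, 11, 1)),
    ("db-01", date(2019, 8, 15)),
    ("hr-laptop-07", date(2019, 10, 28)),
]

def overdue_assets(assets, as_of):
    """Return (hostname, patch age in days) for assets exceeding the policy limit."""
    return [
        (host, (as_of - patched).days)
        for host, patched in assets
        if (as_of - patched).days > PATCH_AGE_LIMIT_DAYS
    ]

for host, age in overdue_assets(inventory, as_of=date(2019, 11, 22)):
    print(f"EXCEPTION: {host} last patched {age} days ago (limit {PATCH_AGE_LIMIT_DAYS})")
```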

By way of example, the organizations that fared better than others during the live hackathon were the ones whose employees practiced healthy skepticism and insisted that the caller validate their identity with emails, a callback number and the name of a supervisor in the firm. When the “targets” persisted in these requests for validating information, the contestants were stopped.

Pillar four is the next lever that deepens insight into enhanced risk governance.

Pillar 4: Cognitive Security and the Human Element

The future of risk governance and decision support will increasingly include the implementation of intelligent informatics. Intelligent informatics is an emerging multidisciplinary approach with real-world solutions in medicine, science, government agencies, technology and industry. These smart systems are being designed to combine computation, cognition and communications to derive new insights to solve complex problems.

We are in the early stage of development of these systems, which include the following burgeoning fields of research:

  • machine learning and soft computing;
  • data mining and big-data analytics;
  • computer vision and pattern recognition; and
  • automated reasoning.

The truth is, many of the functions in risk management, compliance, audit and IT security can and will be automated, providing organizations real-time monitoring and analysis 24/7/365, but humans will be needed to decide how to respond.

Advances in automation will both provide new strategic insights not possible with manual processes and free risk professionals to explore areas of the business that have been inaccessible before. This is not science fiction! Real-life examples exist today, including nurses using clinical decision support systems (CDSS) to improve patient outcomes. An innovation in risk management will be the advent of decision support systems across a range of solutions. These new technologies will allow risk professionals to design solutions that drive decision-making deep into the organization and provide senior management with actionable information about the health of the organization in near real time.
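As a rough sketch of the kind of automated monitoring that can feed such decision support, the example below scores activity records with an isolation forest and surfaces the most anomalous ones for a human reviewer rather than acting on them automatically. The features, synthetic data and thresholds are assumptions for illustration only; real systems would draw on far richer signals.

```python
# Sketch of automated monitoring feeding human decision support (illustrative only):
# score activity records and surface the most anomalous ones for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Assumed features per record: [logins per day, MB downloaded, after-hours logins]
normal = rng.normal(loc=[20, 200, 1], scale=[5, 50, 1], size=(500, 3))
unusual = np.array([[60, 2500, 12], [5, 1800, 9]])   # injected anomalies for the demo
activity = np.vstack([normal, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(activity)
scores = model.decision_function(activity)           # lower score = more anomalous

# Route the worst-scoring records to an analyst instead of blocking automatically.
for idx in np.argsort(scores)[:3]:
    print(f"record {idx}: score={scores[idx]:.3f}, features={np.round(activity[idx], 1)}")
```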

Risk management technology has rapidly evolved over the last 20 years, from compliance-based applications to integrated modules that automate oversight. GRC applications today will pale in comparison to intelligent informatics, which will push internal and external data to decision-makers at every level of the organization. We are at a crossroads: Organizations are operating with one foot in the 19th century and the other foot racing toward new technology without a roadmap.

Technology solutions that do not improve decision-making at each level of the organization may hinder future growth by adding complexity. A strategic imperative for decision support will be driven by factors associated with cost, competition, product and increased regulatory mandates that challenge organizational resilience. Intelligent informatics will be one of many solutions enabling the levers of the human element.

The human element is the empowerment of every level of the organization by imparting situational awareness into performance, risks, efficiency and decision-making, combined with the ability to adjust and respond in a timely manner.  Risk systems are getting smarter and faster, but the real power will only be realized by how well risk professionals learn to leverage these tools to design the solutions needed to help organizations achieve their strategic objectives.

Decision support and situational awareness is the final pillar of a cognitive risk framework. The end product of a cognitive risk framework is the creation of a robust decision support infrastructure that enables situational awareness. A true ERM framework is dynamic, continuously improving and strategic, adding value through actionable intelligence and the capability to respond to a host of threats.

Pillar 5: Decision Support and Situational Awareness

Organizations too often say “everyone owns risk,” but then fail to provide employees with the right tools to manage risks. Risk professionals will continue to be behind the curve without a comprehensive approach for thinking about how to create an infrastructure around decision support and situational awareness.

The five pillars of a cognitive risk framework are designed to provide a roadmap to become a resilient organization. Resiliency will be defined differently by each organization, but the goals inherent in a cognitive risk framework lead to enhanced risk awareness and performance and provide every level of the organization with the right tools to manage its risks within the parameters defined by cognitive governance.

Nimbleness is often cited as an aspirational attribute of a resilient organization; a nimble organization increasingly resembles a technology platform with operational modules designed to scale as needs change. Nineteenth-century organizations are more rigid by design, which reduces their responsiveness to change in comparison to a virtual economy, in which change only requires a few keystrokes. The retail apocalypse is just one of many examples of the transformations to come. A smooth transition to the fourth industrial revolution may depend largely on a digital transformation of the back office and operations.

This dilemma reminds me of a Buddhist saying: “If you meet the Buddha on the road, kill him,” a saying that suggests we need to be able to destroy our most cherished beliefs. We can grow only if we are able to reassess our belief system. To do this, we need to detach ourselves from our beliefs and examine them; if we are wrong, then we must have the mental strength to admit we are wrong, learn and move on.

Enterprise risk management has become the Buddha, and it is elusive even for the most sophisticated organizations.

A cognitive risk framework builds on traditional ERM approaches to put the human at the center of ERM with the tools to manage complex risks. Isn’t it time to infuse the human element in risk management?

Each of the five pillars has been presented at a minimal level of detail, but there is a tremendous amount of scientific research supporting each of the approaches to achieve a heightened level of maturity in risk governance. A cognitive risk framework for cybersecurity and ERM is the only risk framework based on Nobel-Prize-winning research, from Herbert Simon to Daniel Kahneman, and on the work of Paul Slovic and other contemporaries of modern risk thinking.

I have referred to cognitive hacks throughout the installments of a cognitive risk framework. Cognitive hacks – which do not require a computer and are attacks whose chief objective is changing the behavior of the target to achieve the attacker’s goals – were first recognized by researchers at the Center for Cognitive Neuroscience at Dartmouth. Hackers have deployed variations of cognitive hacks, such as phishing, social engineering, deception, deepfakes and other methods, since the beginning of the internet. These attacks are growing in sophistication and deception, as demonstrated in the 2016 election and continuing into 2020.

Cognitive hacks are a global threat to growth in the fourth industrial revolution. The wholesale destruction of institutional norms of behavior and discourse on the internet is a symptom of the pervasive nature of these attacks. Criminals have weaponized stealth and trust on the internet through intelligence gained from our online behavior. A hidden market in our digital footprint is traded by legitimate and illegitimate players with little to no oversight. Government and business leaders have been caught flat-footed in this cyberwar; IT and risk professionals on the front line of defense lack the resources and tools to effectively prevent the spread of the threat.

Cognitive hacks prey on our subconscious, our biases and heuristics of decision-making. A cognitive risk framework was designed to bring awareness to this growing threat and create informed decision support and situational awareness to counter these threats.

A complete version of the cognitive risk framework, with more detail, will be developed in 2020, and more advanced versions of the framework will be developed by others, as well as by me. I want to thank Corporate Compliance Insights for allowing me to introduce this executive summary.


CSO: Cyber security culture is a collective effort.

Deloitte: Cultivating a cyber risk-aware culture

Forbes: 5 Ways to Fight Back Against Cybersecurity Attacks: The Power of Active Defense

Tripwire: Active Defense: Proactive Threat Intelligence with Honeypots

Forbes: Russian Hackers Disguised as Iranian Spies Attacked 35 Countries

DARPA: Active Cyber Defense (ACD)

DARPA: Cyber-Hunting at Scale

DARPA: Cyber Assured Systems Engineering (CASE)

Norton: Good cyber hygiene habits to help stay safe online

Digital Guardian: Enterprise Cyber Hygiene Best Practices: Tips & Strategies for Your Business

NCBI: Improving Cultural Competence

The University of Kansas Department of Electrical Engineering and Computer Science: Intelligent Informatics

Canadian Journal of Nursing Informatics: Nursing and Artificial Intelligence

Dartmouth: Center for Cognitive Neuroscience

World Economic Forum: The Fourth Industrial Revolution: what it means, how to respond

2019-06-11 by: James Bone Categories: Risk Management A Cognitive Risk Framework for the 4th Industrial Revolution

Introducing the Human Element to Risk Management

As posted in Corporate Compliance Insights

As we move into the 4th Industrial Revolution (4IR), risk management is poised to undergo a significant shift. James Bone asks whether traditional risk management is keeping pace. (Hint: it’s not.) What’s really needed is a new approach to thinking about risks.

Framing the Problem
Generally speaking, organizations have one foot firmly planted in the 19th century and the other foot racing toward the future. The World Economic Forum calls this time in history the 4th Industrial Revolution, a $100 trillion opportunity that represents the next generation of connected devices and autonomous systems needed to fuel a new leg of growth. Every revolution creates disruption, and this one will be no exception, including in how risks are managed.

The digital transformation underway is rewriting the rules of engagement.[1] The adoption of digital strategies implies disaggregation of business processes to third-party providers, vendors and data aggregators who collectively increase organizational exposure to potential failures in security and business continuity.[2] Reliance on third parties and sub-vendors extends the distance between customers and service providers, creating a “boundaryless” security environment. Traditional concepts of resiliency are challenged when what is considered a perimeter is as fluid as the disparate service providers cobbled together to serve different purposes. A single service provider may be robust in isolation, but may become fragile during a crisis in connected networks.

Digital transformation is, by design, the act of breaking down boundaries in order to reduce the “friction” of doing business. Automation is enabling speed, efficiency and multilayered products and services, all driven by higher computing power at lower prices. Digital Unicorns, evolving as 10- to 20-year “overnight success stories,” give the impression of endless opportunity, and capital returns from early-stage tech firms continue to drive rapid expansion in diverse digital strategies.

Thus far, these risks have been fairly well-managed, with notable exceptions.

Given this rapid change, it is reasonable to ask if risk management is keeping pace as well. A simple case study may clarify the point and raise new questions.

In 2016, the U.S. presidential election ushered in a new risk: a massive cognitive hack. Researchers at Dartmouth’s Thayer School of Engineering developed the theory of cognitive hacking in 2003, although the technique has been around since the beginning of the internet.[3]

Cognitive hacks are designed to change the behavior and perception of the target of the attack. The use of a computer is optional in a cognitive hack. These hacks have been called phishing or social engineering attacks, but these terms don’t fully explain the diversity of methods involved. Cognitive hacks are cheap, effective and used by nation states and amateurs alike. Generally speaking, “deception” – in defense or offense – on the internet is the least expensive and most effective approach to bypass or enhance security, because humans are the softest target.[4]

In “Cognitive Hack”, one chapter entitled “How to Hack an Election” describes how cognitive hacks have been used in political campaigns around the world to great effect.[5] It is not surprising that it eventually made its way into American politics. The key point is that deception is a real risk that is growing in sophistication and effectiveness.[6]

In researching why information security risks continue to escalate, it became clear that a new framework for assessing risks in a digital environment required a radically new approach to thinking about risks. The escalation of cyber threats against an onslaught of security spending and resources is called the “cyber paradox.”[7] We now know the root cause is the human-machine interaction, but sustainable solutions have been evasive.

Here is what we know … [digital] risks thrive in diverse human behavior!

Some behaviors are predictable, but evolve over time. Security methods that focus on behavioral analytics and defense have found success, but are too reactive to provide assurance. One interesting finding noted that a focus on simplicity and good work relations plays a more effective role than technology solutions. A 2019 study of cyber resilience found that “infrastructure complexity was a net contributor to risks, while the human elements of role alignment, collaboration, problem resolution and mature leadership played key roles in building cyber resilience.”[8]

In studying the phenomenon of how the human element contributes to risk, it became clear that risk professionals in the physical sciences were using these same insights into human behavior and cognition to mitigate risks to personal safety and enable better human performance.

Diverse industries, such as air travel, automotive, health care, tech and many others, have benefited from human element design to improve safety and create sustainable business models. However, the crime-as-a-service (CaaS) model may be the best example of how organized criminals in the dark web work together with the best architects of CaaS products and services, making billions selling to a growing market of buyers.

The International Telecommunication Union (ITU), in publishing its second Global Cybersecurity Index (GCI), noted that approximately 38 percent of countries have a cybersecurity strategy and 12 percent of countries are considering one.[9]

The agency said more effort is needed in this critical area, particularly since a national strategy conveys that a government considers digital risk a high priority. “Cybersecurity is an ecosystem where laws, organizations, skills, cooperation and technical implementation need to be in harmony to be most effective,” stated the report, adding that cybersecurity is “becoming more and more relevant in the minds of countries’ decision-makers.”

Ironically, social networks in the dark web have proven to be more robust than billions in technology spending.

The formation of systemic risks in a broader digital economy will be defined by how well security professionals bridge 19th-century vulnerabilities with next-century business models. Automation will enable the transition, but human behavior will determine the success or failure of the 4th Industrial Revolution.

A broader set of solutions is beyond the scope of this paper, but it will take a coordinated approach to make real progress.

The common denominator in all organizations is the human element, but we lack a formal approach to assess the transition from 19th-century approaches to this new digital environment.[10] Not surprisingly, I am neither the first nor the last to consider the human element in cybersecurity, but I am convinced that the solutions are not purely prescriptive in nature, given the complexity of human behavior.

The assumption is that humans will simply come along like they have so often in the past. Digital transformation will require a more thoughtful and nuanced approach to the human-machine interaction in a boundaryless security environment.

Cognitive hackers from the CIA, NSA and FBI agree that addressing the human element is the most effective approach.[11] A cognitive risk framework is designed to address the human element and enterprise risk management in broader ways than changing employee behavior. A cognitive risk framework is a fundamental shift in thinking about risk management and risk assessment and is ideally suited for the digital economy.

Technology is creating a profound change in how business is conducted. The fragility in these new relationships is concentrated at the human-machine interaction. Email is just one of dozens of iterations of vulnerable endpoints inside and outside of organizations. Advanced analytics will play a critical role in security, but organizational situational awareness will require broader insights.

Recent examples include the 2016 distributed denial of service (DDoS) attack on Dyn, an internet infrastructure company that provides domain name service (DNS) to its customers.[12] A single service provider created unanticipated systemic risks across the East Coast.

DNS translates the web address you type into your browser into the IP address of the server that hosts the site.[13], [14] A DDoS attack on a DNS provider prevents access to the websites that depend on it. Much of the East Coast was in a panic as the attack slowly spread, degrading or knocking offline services including Amazon AWS, Twitter, Spotify, GitHub, Etsy, Vox, PayPal, Starbucks, Airbnb, Netflix and Reddit.
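The dependency described above can be shown in a few lines. The sketch below (an illustration using Python’s standard library, not tied to the Dyn incident) performs the name-to-address lookup that a browser relies on; when the authoritative DNS provider is knocked offline by a DDoS attack, this step fails and the site becomes unreachable by name even though its own servers may be healthy.

```python
# Illustration of the DNS dependency: resolving a hostname to an IP address.
# If the DNS provider is unavailable (e.g., under DDoS), this lookup fails and
# the site is unreachable by name even though its servers may be running.
import socket

def resolve(hostname: str) -> str:
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror as err:
        return f"resolution failed ({err})"

for site in ("example.com", "www.wikipedia.org"):   # arbitrary example hostnames
    print(site, "->", resolve(site))
```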

These risks are known, but mitigating them requires complex arrangements that take time. These visible examples of bottlenecks in the network offer an opportunity to reduce fragility in the internet; however, resilience on the internet will require trusted partnerships to build robust networks beyond individual relationships.

The collaborative development of the internet is the best example of complete autonomy, robustness and fragility. The 4th Industrial Revolution will require cooperation on security, risk mitigation and shared utilities that benefit the next leg of infrastructure.

Unfortunately, systemic risks are already forming that may threaten free trade in technology as nations begin to plan for and impose restrictions to internet access. A recent Bloomberg article lays bare the global divisions forming regionally as countries rethink an open internet amid political and security concerns.[15]

So, why do we need a cognitive risk framework?
Cognitive risk management is a multidisciplinary focus on human behavior and the factors that enhance or distract from good outcomes. Existing risk frameworks tend to consider the downside of human behavior, but human behavior is not one-dimensional, and neither are the solutions. Paradoxically, cybercriminals are expert at exploiting trust in a digital environment and use a variety of methods [cognitive hacks] to change behavior in order to circumvent information security controls.

A simple answer to why is that cognitive risks are pervasive in all organizations, but too often are ignored until too late or not understood in the context of organizational performance. Cognitive risks are diverse and range from a toxic work environment, workplace bias and decision bias to strategic and organizational failure.[16], [17], [18] More recent research is starting to paint a more vivid picture of the role of human error in the workplace, but much of this research is largely ignored in existing risk practice.[19], [20], [21], [22], [23] A cognitive risk framework is needed to address the most challenging risks we face … the human mind!

A cognitive risk framework works just like digital transformation: by breaking down the organizational boundaries that prevent optimal performance and risk reduction.

Redesigning Risk Management for the 4th Industrial Revolution!
The Cognitive Risk Framework for Cybersecurity and Enterprise Risk Management is a first attempt at developing a fluid set of pillars and practices to complement COSO ERM, ISO 31000, NIST and other risk frameworks with the human at the center. Each of the Five Pillars will be explored as a new model for resilience in the era of digital transformation.

It is time to humanize risk management!

A cognitive risk framework has five pillars. Subsequent articles will break down each of the five pillars to demonstrate how each pillar supports the other as the organization develops a more resilient approach to risk management.

The Five Pillars of a Cognitive Risk Framework include:

I. Cognitive Governance
II. Intentional Design
III. Risk Intelligence & Active Defense
IV. Cognitive Security/Human Elements
V. Decision Support (situational awareness)

Lastly, as part of the roll out of a cognitive risk framework, I am conducting research at Columbia University’s School of Professional Studies to better understand advances in risk practice beyond existing risk frameworks. My goal, with your help, is to better understand how risk management practice is evolving across as many risk disciplines as possible. Participants in the survey will be given free access to the final report. An executive summary will be published with the findings. Contact me at jb4015@columbia.edu. Emails will be used only for the purpose of distributing the survey and its findings.

*Correction: The reference to Level 3 Communications experiencing a cyberattack was reported incorrectly. The reference to Level 3 relates to a 2013 outage due to a “failing fiber optic switch,” not a cyberattack. Apologies for the incorrect attribution. The purpose of the reference is to illustrate systemic risks in the internet. James Bone

[1] https://robllewellyn.com/10-digital-transformation-risks/

[2] https://www.information-age.com/security-risks-in-digital-transformation-123478326/

[3] http://www.ists.dartmouth.edu/library/301.pdf

[4] https://www.csiac.org/journal-article/cyber-deception/

[5] https://www.amazon.com/Cognitive-Hack-Battleground-Cybersecurity-Internal/dp/149874981X

[6] https://www.csiac.org/journal-article/cyber-deception/

[7] https://www.lawfareblog.com/cyber-paradox-every-offensive-weapon-potential-chink-our-defense-and-vice-versa

[8] https://www.ibm.com/downloads/cas/GAVGOVNV

[9] https://news.un.org/en/story/2017/07/560922-half-all-countries-aware-lacking-national-plan-cybersecurity-un-agency-reports

[10] https://www.humanelementsecurity.com/content/Leadership.aspx

[11] http://aapa.files.cms-plus.com/SeminarPresentations/2016Seminars/2016SecurityIT/Lee%20Black.pdf

[12] https://www.wired.com/2016/10/internet-outage-ddos-dns-dyn/

[13] https://public-dns.info/nameserver/us.html

[14] https://en.wikipedia.org/wiki/List_of_managed_DNS_providers

[15] https://www.bloomberg.com/quicktake/how-u-s-china-tech-rivalry-looks-like-a-digital-cold-war?srnd=premium

[16] https://healthprep.com/articles/mental-health/types-workplace-bullies/?utm_source=google

[17] https://www.forbes.com/sites/amyanderson/2013/06/17/coping-in-a-toxic-work-environment/

[18] https://knowledge.wharton.upenn.edu/article/is-your-workplace-tough-or-is-it-toxic/

[19] https://www.robsonforensic.com/articles/human-error-expert-witness-human-factors

[20] https://rampages.us/srivera/2015/05/24/errors-in-human-inquiry/

[21] https://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-283

[22] https://www.jstor.org/stable/1914185?seq=1#page_scan_tab_contents

[23] https://www.behavioraleconomics.com/resources/mini-encyclopedia-of-be/bounded-rationality/

2019-04-17 by: James Bone Categories: Risk Management Reframing the Business Case for Audit Automation

… Plus 6 Steps to Enhanced Assurance

The audit profession is facing unprecedented demands, but there are a host of tools available to help. James Bone outlines the benefits to automating audit tasks.

Internal audit is under increasing pressure across many quarters from challenges to audit objectivity, ethical behavior and requests to reduce or modify audit findings.[1] “More than half of North American Chief Audit Executives (CAEs) said they had been directed to omit or modify an important audit finding at least once, and 49 percent said they had been directed not to perform audit work in high-risk areas.” That’s according to a report by The Institute of Internal Auditors (IIA) Research Foundation, based on a survey of 494 CAEs and some follow-up interviews.

Challenges to audit findings are a normal part of the process for clarifying risks associated with weakness in internal controls and gaps that expose the organization to threats. However, the opportunity to reduce subjectivity and improve audit consistency is critical to minimizing second guessing and enhanced credibility. One of the ways to improve audit consistency and objectivity is to reframe the business case for audit automation.

Audit automation provides audit professionals with the tools to reduce their focus on low-risk, high-frequency areas of risk. It also provides a means of detecting changes in those areas and monitoring the velocity of high-frequency risks that may lead to increased exposures or to the development of new risks.

More importantly, the challenges to audit findings associated with low-frequency, high-impact risks (the less common ones) typically deal with an area of uncertainty that is harder to justify without objective data. Uncertainty, or “unknown unknowns,” is the hardest risk to justify using a subjective, point-in-time audit methodology. Uncertainty, by definition, requires statistical and predictive methods that give auditors an understanding of the distribution of probabilities, as well as the correlations and degrees of confidence associated with a risk. Uncertainty, or probability management, gives auditors next-level capabilities to discuss risks that are elusive and hard to nail down. Automation provides internal auditors with the tools to shape the discussion about uncertainty more clearly and to understand the context in which these events become more prevalent.
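As a hedged illustration of what “probability management” can look like in practice, the sketch below simulates a distribution of annual losses for a low-frequency, high-impact risk and reports percentiles an auditor could use to frame uncertainty. The event rate and severity parameters are invented for illustration and are not drawn from the article.

```python
# Sketch of probability management for a low-frequency, high-impact risk:
# simulate annual losses and express uncertainty as a distribution rather than
# a single point estimate. All parameters below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
N_YEARS = 100_000                          # simulated years
EVENT_RATE = 0.3                           # assumed expected events per year (Poisson)
SEVERITY_MU, SEVERITY_SIGMA = 13.0, 1.0    # assumed lognormal severity (log scale)

event_counts = rng.poisson(EVENT_RATE, N_YEARS)
annual_loss = np.array([
    rng.lognormal(SEVERITY_MU, SEVERITY_SIGMA, n).sum() for n in event_counts
])

print(f"mean annual loss:         ${annual_loss.mean():,.0f}")
print(f"95th percentile loss:     ${np.percentile(annual_loss, 95):,.0f}")
print(f"99th percentile loss:     ${np.percentile(annual_loss, 99):,.0f}")
print(f"probability of any event: {np.mean(event_counts > 0):.1%}")
```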

Risk communications is one of the biggest challenges for all oversight professionals.[2] According to an article in Harvard Business Review:

“We tend to be overconfident about the accuracy of our forecasts and risk assessments and far too narrow in our assessment of the range of outcomes that may occur. Organizational biases also inhibit our ability to discuss risk and failure. In particular, teams facing uncertain conditions often engage in groupthink: Once a course of action has gathered support within a group, those not yet on board tend to suppress their objections — however valid — and fall in line.”

Everyone in the organization has a slightly different perception of risk that is influenced by heuristics developed over a lifetime of experience. Heuristics are mental shortcuts individuals use to make decisions. Most of the time, our heuristics work just fine with the familiar problems we face. Unfortunately, we do not recognize when our biases mislead us in judging more complex risks. In some cases, what appears to be lapses in ethical behavior may simply be normal human bias, which may lead to different perceptions of risk. How does internal audit overcome these challenges?

The Opportunity Cost of Not Automating

Technology is not a solution in and of itself; it is an enabler that makes staff more effective when it is integrated strategically to complement their strengths and address areas with room for improvement. Automation creates situational awareness of risks, and technology solutions that improve situational awareness in audit assurance are ideally the end goal. Situational awareness (SA) in audit is not a one-size-fits-all proposition. In some organizations, SA involves improved data analysis; in others, it may include a range of continuous monitoring and reporting in near real time. Situational awareness reduces human error by making sense of the environment with objective data.

A growing body of research demonstrates that human error is the biggest cause of risk in a wide range of organizations, from IT security to health care and organizational performance.[3][4][5] Reducing human error and improving insight into operational performance are now possible with automation. Chief Audit Officers have the opportunity to lead in collaboration with operations, finance, compliance and risk management on automation that supports each of the key stakeholders who provide assurance.

Collaboration on automation reduces redundancies for data requests, risk assessments, compliance reviews and demands on IT departments. Smart automation integrates oversight into operations, reduces human error, improves internal controls and creates situational awareness where risks need to be managed. These are the opportunity costs of not automating.

A Pathway to Enhanced Assurance

Audit automation has become a diverse set of solutions offered by a range of providers but that point alone should not drive the decision to automate. Developing a coherent strategy for automation is the key first step. Whether you are a Chief Audit Officer starting to consider automation or you and your team are well-versed in automation platforms, it may be a good time to rethink audit automation, not as a one-off budget item, but as a strategic imperative to be integrated into operations focused on the things that the board and senior executives think are important. This will require the organization to see audit as integral to operational excellence and business intelligence. Reframing the role of audit through automation is the first step toward enhanced assurance.

Auditors are taught to be skeptical while conducting attestation engagements; however, there is no statistical definition for assurance. Assurance requires the use of subjective judgments in the risk assessment process that may lead to variability in the quality of audits between different people within the same audit function.[6] According to ISACA’s IS Audit and Assurance Guideline 2202 Risk Assessment in Planning, Risk Assessment Methodology 2.2.4, “all risk assessment methodologies rely on subjective judgments at some point in the process (e.g., for assigning weights to the various parameters). Professionals should identify the subjective decisions required to use a particular methodology and consider whether these judgments can be made and validated to an appropriate level of accuracy.” Too often these judgments are difficult to validate with a repeatable level of accuracy without quantifiable data and methodology. 

Scientific methods are the only proven way to develop degrees of confidence in risk assessment and correlations between cause and effect. “In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.”[7] The only way to adequately reduce the risk of sampling error is to automate data sampling. Trending sample data helps auditors detect seasonality and other factors that occur as a result of the ebb and flow of business dynamics.
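The effect of sample size on sampling error can be made concrete with a short sketch: the normal-approximation confidence interval around an observed exception rate narrows as automation increases the number of items tested. The 4 percent exception rate and the sample sizes below are assumptions for illustration.

```python
# Sketch: how sample size drives sampling error. The confidence interval around
# an observed exception rate tightens as automation tests more items.
# The rate and sample sizes are illustrative assumptions.
import math

def confidence_interval(p_hat: float, n: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for an observed proportion."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - margin), min(1.0, p_hat + margin)

observed_exception_rate = 0.04   # assumed: 4% of sampled items fail the control
for n in (50, 500, 5000):        # manual sample sizes vs. automated, larger samples
    low, high = confidence_interval(observed_exception_rate, n)
    print(f"n={n:5d}: exception rate 4.0%, 95% CI [{low:.1%}, {high:.1%}]")
```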

A Pathway to Enhanced Assurance

  1. Identify the greatest opportunities to automate routine audit processes.
  2. Prioritize automation projects each budget cycle in coordination with operations, risk management, IT and compliance as applicable.
  3. Prioritize projects that leverage data sources that optimize automation projects across multiple stakeholders (operational data used by multiple stakeholders). One-offs can be integrated over time as needed.
  4. Develop a secondary list of automation projects that allow for monitoring, business intelligence and confidentiality.
  5. Design automation projects with levels of security that maintain the integrity of the data based on users and sensitivity of the data.
  6. Consider the questions most important to senior executives.[8]

“Look, I have got a rule,” General Powell [said]. “As an intelligence officer, your responsibility is to tell me what you know. Tell me what you don’t know. Then you’re allowed to tell me what you think. But you [should] always keep those three separated.”[9]

– Tim Weiner, reporting in The New York Times on wisdom that former Director of National Intelligence Mike McConnell learned from General Colin Powell

The business case for audit automation has never been stronger given the demands on internal audit. Today, the tools are available to reduce waste, improve assurance, validate audit findings and provide for enhanced audit judgment on the risks that really matter to management and audit professionals.


[1] https://www.journalofaccountancy.com/issues/2015/jun/internal-audit-objectivity.html

[2] https://hbr.org/2012/06/managing-risks-a-new-framework

[3] https://www.cio.com/article/3078572/human-error-biggest-risk-to-health-it.htm

[4] https://hbr.org/2016/09/the-biggest-cybersecurity-threats-are-inside-your-company

[5] https://www.irmi.com/articles/expert-commentary/performance-management-and-the-human-error-factor-a-new-perspective

[6] https://m.isaca.org/Knowledge-Center/ITAF-IS-Assurance-Audit-/IS-Audit-and-Assurance/Documents/2202-Risk-Assessment-in-Planning_gui_Eng_0614.pdf

[7]  Babbie, Earl R. (2013). “The logic of sampling.” The Practice of Social Research (13th ed.). Belmont, CA: Cengage Learning. pp. 185–226. ISBN 978-1-133-04979-1.

[8] https://fas.org/irp/congress/2004_hr/091304powell.html

[9] https://casnocha.com/2007/12/what-you-know-w.html

2019-01-23 by: James Bone Categories: Risk Management Cognitive Hack: The New Battleground In Cybersecurity

James Bone is the author of Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind (Taylor & Francis, 2017) and is a contributing author for Compliance Week, Corporate Compliance Insights and Life Science Compliance Updates. James is a lecturer at Columbia University’s School of Professional Studies in the Enterprise Risk Management program and consults on ERM practice.

He is the founder and president of Global Compliance Associates, LLC and Executive Director of TheGRCBlueBook. James founded Global Compliance Associates, LLC to create the first cognitive risk management advisory practice. James graduated from Drury University with a B.A. in Business Administration, from Boston University with an M.A. in Management and from Harvard University with an M.A. in Business Management, Finance and Risk Management.


Christopher P. Skroupa: What is the thesis of your book Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind and how does it fit in with recent events in cyber security?

James Bone: Cognitive Hack follows two rising narrative arcs in cyber warfare: the rise of the “hacker” as an industry and the “cyber paradox,” namely why billions spent on cyber security fail to make us safe. The backstory of the two narratives reveals a number of contradictions about cyber security, as well as how surprisingly simple it is for hackers to bypass defenses. The cyber battleground has shifted from an attack on hard assets to a much softer target: the human mind. If human behavior is the new and last “weakest link” in the cyber security armor, is it possible to build cognitive defenses at the intersection of human-machine interactions? The answer is yes, but the change that is needed requires a new way of thinking about security, data governance and strategy. The two arcs meet at the crossroads of data intelligence, deception and a reframing of security around cognitive strategies.

The purpose of Cognitive Hack is to look not only at the digital footprint left behind from cyber threats, but to go further—behind the scenes, so to speak—to understand the events leading up to the breach. Stories, like data, may not be exhaustive, but they do help to paint in the details left out. The challenge is finding new information buried just below the surface that might reveal a fresh perspective. The book explores recent events taken from today’s headlines to serve as the basis for providing context and insight into these two questions.

Skroupa: IoT has been highly scrutinized as having the potential to both increase technological efficiency and broaden our cyber vulnerabilities. Do you believe the risks outweigh the rewards? Why?

Bone: The recent Internet outage in October of this year is a perfect example of the power and stealth of IoT-based attacks. What many are not aware of is that hackers have been experimenting with IoT attacks in increasingly more complex and potentially damaging ways. The TOR Network, used in the Dark Web to provide legitimate and illegitimate users anonymity, was almost taken down by an IoT attack. Security researchers have been warning of other examples of connected smart devices being used to launch DDoS attacks that have not garnered media attention. As the number of smart devices grows, the threat grows with it. The anonymous attacker in October is said to have used only 100,000 devices. Imagine what could be done with one billion devices as manufacturers globally export them, creating a new network of insecure connections with little to no security in place to detect, correct or prevent hackers from launching attacks from anywhere in the world.

The question of weighing the risks versus the rewards is an appropriate one. Consider this: The federal government has standards for regulating the food we eat, the drugs we take, the cars we drive and a host of other consumer goods and services, but the single most important tool the world increasingly depends on has no gatekeeper to ensure that the products and services connected to the Internet don’t endanger national security or pose a risk to its users. At a minimum, manufacturers of IoT must put measures in place to detect these threats, disable IoT devices once an attack starts and communicate the risks of IoT more transparently. Lastly, the legal community has also not kept pace with the development of IoT; however, this is an area that will be ripe for class action lawsuits in the near future.

Skroupa: What emerging trends in cyber security can we anticipate from the increasing commonality of IoT?

Bone: Cyber crime has grown into a thriving black market complete with active buyers and sellers, independent contractors and major players who, collectively, have developed a mature economy of products, services, and shared skills, creating a dynamic laboratory of increasingly powerful cyber tools unimaginable before now. On the other side, cyber defense strategies have not kept pace even as costs continue to skyrocket amid asymmetric and opportunistic attacks. However, a few silver linings are starting to emerge around a cross-disciplinary science called Cognitive Security (CogSec), Intelligence and Security Informatics (ISI) programs, Deception Defense, and a framework of Cognitive Risk Management for cyber security.

On the other hand, the job description of “hacker” is evolving rapidly with some wearing “white hats,” some with “black hats” and still others with “grey hats.” Countries around the world are developing cyber talent with complex skills to build or break security defenses using easily shared custom tools.

The rise of the hacker as a community and an industry will have long-term ramifications for our economy and national security that deserve more attention; otherwise, the unintended consequences could be significant. In the same light, the book looks at the opportunity and challenge of building trust into networked systems. Building trust in networks is not a new concept, but it is too often a secondary or tertiary consideration as systems designers rush products and services to market to capture market share, leaving security considerations to corporate buyers. IoT is a great example of this challenge.

Skroupa: Could you briefly describe the new Cognitive Risk Framework you’ve proposed in your book as a cyber security strategy?

Bone: First of all, this is the first cognitive risk framework of its kind designed for enterprise risk management. The Cognitive Risk Framework for Cyber security (CRFC) is an overarching risk framework that integrates technology and behavioral science to create novel approaches in internal controls design that act as countermeasures lowering the risk of cognitive hacks. The framework has targeted cognitive hacks as a primary attack vector because of the high success rate of these attacks and the overall volume of cognitive hacks versus more conventional threats. The cognitive risk framework is a fundamental redesign of enterprise risk management and internal controls design for cyber security but is equally relevant for managing risks of any kind.

The concepts referenced in the CRFC are drawn from a large body of research in multidisciplinary topics. Cognitive risk management is a sister discipline of a parallel body of science called Cognitive Informatics Security, or CogSec. It is also important to point out that, as the creator of the CRFC, I have borrowed the principles and practices prescribed herein from cognitive informatics security, machine learning, artificial intelligence (AI), and behavioral and cognitive science, among other fields that are still evolving. The Cognitive Risk Framework for Cyber security revolves around five pillars: Intentional Controls Design, Cognitive Informatics Security, Cognitive Risk Governance, Cyber security Intelligence and Active Defense Strategies and Legal “Best Efforts” considerations in Cyberspace.

Many organizations are doing some aspect of a “cogrisk” program but haven’t formulated a complete framework; others have not even considered the possibility; and still others are on the path toward a functioning framework influenced by management. The Cognitive Risk Framework for Cybersecurity is a response to this interim period of transition to a new level of business operations (cognitive computing), informed by better intelligence, to solve the problems that hinder growth.

Christopher P. Skroupa is the founder and CEO of Skytop Strategies, a global organizer of conferences.

https://www.forbes.com/sites/christopherskroupa/2016/11/21/cognitive-hack-the-new-battleground-in-cybersecurity/#746438ab7f3e

by: James Bone Categories: Risk Management Cognitive Hack: Trust, Deception and Blind Spots

When we think of hacking, we think of a network being hacked remotely by a computer nerd sitting in a bedroom, using code she’s written to steal personal data or money, or just to see if it is possible. The idea of a character breaking network security to take control of law enforcement systems has been imprinted on our psyche by images portrayed in TV crime shows; however, the real story is both more complex and simpler in execution.

The idea behind a cognitive hack is simple. A cognitive hack refers to the use of a computer or information system [social media, etc.] to launch a different kind of attack. A cognitive attack relies on its ability to “change human users’ perceptions and corresponding behaviors in order to be successful.”[1] Robert Mueller’s indictment of 13 Russian operatives is an example of a cognitive hack taken to the extreme, but it demonstrates the effectiveness and subtleties of an attack of this nature.[2]

Mueller’s indictment of an elaborately organized and surprisingly low-cost “troll farm,” set up to launch an “information warfare” operation from Russian soil to influence U.S. political elections using social media platforms, is extraordinary and dangerous. The danger of these attacks is only now becoming clear, but it is also important to understand the simplicity of a cognitive hack. To be clear, the Russian attack is extraordinary in scope, purpose and effectiveness; however, these attacks happen every day for much more mundane purposes.

Most of us think of these attacks as email phishing campaigns designed to lure unsuspecting users into clicking on a link that gives the attacker access to their data. Russia’s attack is simply a more elaborate and audacious version, designed to influence what we think and how we vote and to foment dissent between political parties and the citizenry of a country. That is what makes Mueller’s detailed indictment even more shocking.[3] Consider, for example, how TV commercials, advertisers and, yes, politicians have been very effective at using “sound bites” to simplify their product story and appeal to certain target markets. The art of persuasion is a simple way to explain a cognitive hack, which is an attack focused on the subconscious.

It is instructive to look at the Russian attack rationally from its [Russia’s] perspective in order to objectively consider how this threat can be deployed on a global scale. Instead of spending billions of dollars in a military arms race, countries can now arm themselves with the ability to influence the citizens of another country for a few million dollars simply through information warfare. A new, more advanced cadre of computer scientists is being groomed to build security for, and defend against, these sophisticated attacks. This is simply an old trick disguised in 21st-century technology through the use of the internet.

A new playbook has been refined to hack political campaigns and has been used effectively around the world, as documented in a March 2016 article. For more than 10 years, elections in Latin America have been a testing ground for how to hack an election. The drama in the U.S. reads like one episode of a long-running soap opera, complete with “hackers for hire,” “middle-men,” political conspiracy and interference by sovereign countries.

“Only amateurs attack machines; professionals target people.”[4]

Now that we know the rules have changed, what can be done about this form of cyberattack? Academics, government researchers and law enforcement have studied this problem for decades, but the general public is largely unaware of how pervasive the risk is and the threat it poses to our society and to the next generation of internet users.

I wrote a book, Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, to chronicle this risk and proposed a cognitive risk framework to bring awareness to the problem. Much more is needed to raise awareness among organizations, government officials and risk professionals around the world. A new cognitive risk framework is needed to better understand these threats, to identify and assess new variants of attack and to develop contingencies rapidly.

Social media has unwittingly become a platform of choice for nation-state hackers, who can easily hide the identity of the organizations and resources involved in these attacks. Social media platforms are largely unregulated and therefore are not required to verify the identity and source of funding behind the accounts that set up and operate these kinds of operations. This may change, given the stakes involved.

Just as banks and other financial services firms are required to identify new account owners and their sources of funding, providers of social media platforms may face similar obligations, because those platforms can also be used as venues for raising and laundering illicit funds to carry out fraud or attacks on a sovereign state. We now have explicit evidence of the threat this poses to emerging and mature democracies alike.

Regulation alone is not enough to address an attack this complex, and existing training programs have proven ineffective. Traditional risk frameworks and security measures are not designed to deal with attacks of this nature. Fortunately, a handful of information security professionals are now considering how to implement new approaches to mitigate the risk of cognitive hacks. The National Institute of Standards and Technology (NIST) is also working on an expansive new training program for information security specialists, specifically designed to address the human element of security, yet the public is largely on its own. The knowledge gap is huge, and the general public needs more than an easy-to-remember slogan.

A national debate among industry leaders is needed to tackle security. Silicon Valley and the tech industry, writ large, must also step up and play a leadership role in combating these attacks by forming self-regulatory consortiums to deal with the diversity and proliferation of cyber threats, addressing vulnerabilities in new technology launches and developing more secure networking systems. The cost of cyber risk is growing far faster than the rate of inflation and will eventually become a drag on corporate earnings and national growth rates as well. Businesses must look beyond the "insider threat" model of security risk and reconsider how the work environment contributes to exposure to cyberattacks.

Cognitive risks require a new mental model for understanding “trust” on the internet. Organizations must begin to develop new trust measures for doing business over the internet and with business partners. The idea of security must also be expanded to include more advanced risk assessment methodologies along with a redesign of the human-computer interaction to mitigate cognitive hacks.

Cognitive hacks are asymmetric in nature, meaning that the downside of these attacks can significantly outweigh the benefits of risk-taking if they are not addressed in a timely manner. Because of this asymmetry, attackers seek the easiest route to gain access. Email is one example of a low-cost and very effective attack vector that leverages the digital footprint we leave on the internet.

Imagine a sandy beach where you leave footprints as you walk, but instead of the tide erasing them, they remain forever, holding bits of data about you all along the way. Web accounts, free Wi-Fi networks, mobile phone apps, shopping websites and the like create a digital profile that may be more public than you realize. Now consider how your employees' behavior on the internet during work connects back to this digital footprint, and you start to get an idea of how simple it is for hackers to breach a network.

A cognitive risk framework begins with an assessment of risk perceptions related to cyber risks at different levels of the firm. The risk perceptions assessment creates a Cognitive Map of the organization's cyber awareness. This is called Cognitive Governance and is the first of five pillars for managing asymmetric risks. The other four pillars are driven by the findings in the cognitive map.

A cognitive map uncovers the blind spots we all experience when a situation at work or on the internet exceeds our experience of how to deal with it successfully. Hackers exploit these natural blind spots to deceive us into changing our behavior: clicking a link, a video or a promotional ad, or even altering what we read. Trust, deception and blind spots are just a few of the elements we must incorporate into a new toolkit called the cognitive risk framework.
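To make the idea concrete, here is a minimal sketch in Python of how risk-perception survey responses from different levels of a firm might be rolled up into a simple cognitive map, with large gaps between the boardroom and the front line flagged as candidate blind spots. The survey data, risk categories and threshold are hypothetical illustrations, not part of the framework itself.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: (organizational level, risk category, score 1-5),
# where the score reflects how well-prepared the respondent feels against that risk.
responses = [
    ("board", "phishing", 4), ("board", "ransomware", 4),
    ("management", "phishing", 3), ("management", "ransomware", 2),
    ("front_line", "phishing", 2), ("front_line", "ransomware", 1),
]

# Roll responses up into a simple cognitive map: average perceived
# preparedness per risk category at each level of the firm.
by_level_and_risk = defaultdict(list)
for level, risk, score in responses:
    by_level_and_risk[(level, risk)].append(score)
cognitive_map = {key: mean(scores) for key, scores in by_level_and_risk.items()}

# Flag candidate blind spots: risks where the boardroom's perception
# diverges sharply from the front line's.
for risk in {risk for _, risk in by_level_and_risk}:
    gap = cognitive_map[("board", risk)] - cognitive_map[("front_line", risk)]
    if gap >= 2:  # illustrative threshold
        print(f"Potential blind spot: {risk} "
              f"(board {cognitive_map[('board', risk)]:.1f} vs "
              f"front line {cognitive_map[('front_line', risk)]:.1f})")
```

In practice the inputs would come from structured interviews or surveys at each level of the firm; the point here is only that divergent perceptions become visible once they are mapped side by side.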

There is little doubt that Mueller's investigation into the sources and methods used by the Russians to influence the 2016 election will reveal more surprises, but one thing is no longer in doubt: the Russians have a new cognitive weapon that is deniable but still traceable, for now. They are learning from Mueller's findings and will get better.

Will we?

[1] http://www.ists.dartmouth.edu/library/301.pdf

[2] https://www.bloomberg.com/news/articles/2018-02-17/mueller-deflates-trump-s-claim-that-russia-meddling-was-a-hoax

[3] https://www.scribd.com/document/371673084/Internet-Research-Agency-Indictment#from_embed

[4] https://www.schneier.com/blog/archives/2013/03/phishing_has_go.html


2017-05-21 by: James Bone Categories: Risk Management The Emergence of a Cognitive Risk Era: Human-Centered Risk Management

Musings of a Cognitive Risk Manager

Before beginning a discussion of human-centered risk, it is important to provide context for why we must consider new ways of thinking about risk. The context matters because the change affecting risk management has happened so rapidly that we have hardly noticed it. If you are under the age of 25, you take for granted the Internet as we know it today and the ubiquitous utility of the World Wide Web. Twenty-five years ago, dial-up modems were the norm, and desktop computers running "Windows" were rare outside large companies. Fast-forward to today, and we don't give a second thought to the changes a digital economy has brought to how we work, communicate, share information and conduct business.

What hasn’t changed (or what hasn’t changed much) during this same time is how risk management is practiced and how we think about risks. Is it possible that risks and the processes for measuring risk should remain static? Of course not, so why do we still depend solely on using the past as prologue for potential threats in the future? Why are qualitative self-assessments still a common approach for measuring disparate risks? More importantly, why do we still believe that small samples of data, taken at intervals, provide senior management with insights into enterprise risk?

The constant is human behavior!

Technology has been successful at helping us get more done whenever and wherever we need to conduct business. The change brought on by innovation has nearly eliminated the separation between our work and personal lives; as a result, businesses and individuals are now exposed to new risks that are harder to understand and measure. The combination of a hardened enterprise perimeter and a soft middle has created a paradox in risk management: the paradox of Robust Yet Fragile. Organizations enjoy robust technological capability to network, partner and conduct business 24/7, yet we are more vulnerable, or fragile, to massive systemic risks. Why are we more fragile?

The Internet is the prototypical example of a complex system that is "scale-free," with a hub-like core structure that makes it robust to the random loss of individual nodes yet fragile to targeted attacks on highly connected nodes, or hubs. Likewise, large and small corporations are beginning to look more like diverse forms of complex systems, with increased dependency on the Internet as a service model and on a distributed network of vendors who provide a variety of services no longer deemed critical or cost-effective to perform in-house.
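A quick way to see the robust-yet-fragile property is to simulate it. The sketch below is a hedged illustration rather than a model of any real network: it builds a scale-free (Barabási-Albert) graph with the networkx library and compares how much of the network stays connected after random node failures versus a targeted attack on the most-connected hubs. The exact numbers vary by run, but the targeted attack consistently does far more damage.

```python
import random
import networkx as nx

def largest_component_fraction(graph):
    """Fraction of remaining nodes that sit in the largest connected component."""
    largest = max(nx.connected_components(graph), key=len)
    return len(largest) / graph.number_of_nodes()

random.seed(0)
n_nodes, n_removed = 1000, 50

# A Barabasi-Albert graph is a standard stand-in for a scale-free,
# hub-dominated network such as the Internet's core.
base = nx.barabasi_albert_graph(n_nodes, m=2, seed=42)

# Random failures: remove 50 nodes chosen at random.
g_random = base.copy()
g_random.remove_nodes_from(random.sample(list(g_random.nodes), n_removed))

# Targeted attack: remove the 50 most highly connected hubs.
g_targeted = base.copy()
hubs = sorted(g_targeted.degree, key=lambda nd: nd[1], reverse=True)[:n_removed]
g_targeted.remove_nodes_from(node for node, _ in hubs)

print("largest component after random failures:", largest_component_fraction(g_random))
print("largest component after targeted attack:", largest_component_fraction(g_targeted))
```

Removing the same number of nodes, the random-failure case typically keeps nearly all remaining nodes in one component, while the hub-targeted case fragments noticeably, which is the corporate analogy drawn above: losing an ordinary vendor is survivable, losing a highly connected one is not.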

Collectively, organizations have leveraged complex systems to respond to customer and stakeholder demands and create value, unwittingly becoming more exposed to fragility at critical junctures. Systemic fragility has been tested during recent distributed denial-of-service (DDoS) attacks on critical Internet service providers and recent ransomware attacks, both of which spread with alarming speed. What changed? After each event, risk professionals breathe a sigh of relief and continue pursuing the same strategies that leave organizations vulnerable to massive failure. The Great Recession of 2008-2009 is yet another example of the fragility of complex systems and of a tepid response to systemic risks. Do we mistakenly take survival as a sign of a cure for the symptoms of systemic illness?

After more than 20 years of explosive productivity growth, the layering of networked systems now poses some of the greatest risks to future growth and security. Productivity has stalled as humans become the bottleneck in this infrastructure. Billions of dollars are currently rushing in to finance the next phase of the Internet of Things, which will extend our vulnerabilities to devices in our homes, our cars and, eventually, more. Is it really possible to fully understand these risks with 19th-century risk management?

The dawn of the digital economy has resulted in the democratization of content and the disintermediation of past business models in ways unimaginable 20 years ago. I will spare you the boring science behind the limits of human cognition, but let's just say that if you can't remember what you had for dinner last Wednesday night, you are not alone.

But is that enough reason to change your approach to risk management? Not surprisingly, the answer is yes! Acknowledging that risk managers need better tools to measure more complex and emerging risks should no longer be considered a weakness. It also means that expecting employees to follow, without fail or assistance, the growing complexity of policies, procedures and IT controls required to deal with a myriad of risks may be unrealistic without better tools. Risk management approaches for the 21st century are needed to respond to the new environment in which we now live.

Over the last 30 years, risk management programs have been built "in response" to failures in systems and processes and to human error. Human-centered risk management starts with the human and redesigns internal controls to optimize the objectives of the organization while reducing risks. This may sound like a subtle difference, but it is in fact a radically different approach, though not a new one.

Human-factors engineers first met in 1955 in Southern California, but the discipline's contributions to safety across diverse industries are now under-appreciated. We don't give a second thought to the technology that protects us when we travel in cars, trucks and airplanes or undergo complex medical procedures. These advances in risk management did not happen by accident; they were designed into the products and services we enjoy today!

Each of these industries recognized that human error posed the greatest risk to the objectives of their organizations. Instead of blaming humans, however, they sought ways to reduce the complexity that leads to human error and found innovative ways to grow their markets while reducing risks. Imagine designing internal controls that are as intuitive as using a cell phone, allowing employees to focus on the job at hand instead of being distracted by multitasking! A human-centered risk program looks at the human-machine interaction to understand how the work environment contributes to risk.

I will return to this concept in subsequent papers to explain how the human-machine interaction contributes to risk. For now, suffice it to say that there is ample research and empirical data to support the argument. To further explain a human-centered risk approach, we must also understand how decision-making is affected by 19th-century risk practices.

Situational awareness is a critical component of human-centered risk management. It rests on perceiving events, comprehending their meaning, projecting their status after conditions change or new data are introduced, and predicting with clarity how change affects outcomes and expectations. The opportunity in risk management is to improve situational awareness across the enterprise. Enterprise risks are important, but they are not all equal and should not be treated the same. Situational awareness helps senior executives understand the difference.

The challenge in most organizations is that situational awareness is assumed to be a byproduct of experience and training and is seldom revisited when the work environment changes to absorb new products, processes or technology. The failure to understand this vulnerability in risk perception occurs at all levels of the organization, from the boardroom down to the front line. The vast majority of changes introduced in organizations are minor in nature but accumulate over time, contributing to a loss of transparency, or inattentional blindness, that degrades situational awareness. This is one of the many reasons organizations are surprised by unanticipated events. We simply cannot see it coming!
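To illustrate, here is a minimal sketch, with entirely hypothetical names and thresholds, of how the three components of situational awareness described above (perception, comprehension, projection) could be recorded per team, and how an accumulation of individually minor changes might trigger a review before inattentional blindness sets in.

```python
from dataclasses import dataclass

@dataclass
class AwarenessRecord:
    """One team's documented situational awareness (illustrative only)."""
    team: str
    perception: str       # what signals and events the team is watching
    comprehension: str    # what those signals mean for its objectives
    projection: str       # how conditions are expected to evolve
    minor_changes_since_review: int = 0

REVIEW_THRESHOLD = 3  # hypothetical: many small changes add up to a big one

def register_change(record: AwarenessRecord) -> None:
    """Count each minor change (new tool, process tweak, vendor swap) and
    flag the record for review once the changes accumulate."""
    record.minor_changes_since_review += 1
    if record.minor_changes_since_review >= REVIEW_THRESHOLD:
        print(f"Revisit situational awareness for {record.team}: "
              f"{record.minor_changes_since_review} changes since last review")

ops = AwarenessRecord(
    team="payments operations",
    perception="volume spikes, failed-login alerts",
    comprehension="spikes coincide with a new mobile release",
    projection="fraud attempts likely to rise over the next quarter",
)
for _ in range(3):  # three small process tweaks land in one quarter
    register_change(ops)
```

The value is not in the code but in making the review trigger explicit instead of assuming that awareness keeps pace with change on its own.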

Human-centered risk management focuses on designing situational awareness into the work environment from the boardroom down to the shop floor. This multidisciplinary approach requires a new set of tools and cognitive techniques to understand when imperfect information could lead to errors in judgment and decision-making. The principles and processes for designing situational awareness will be discussed in subsequent articles. The goal of human-centered risk management is to design scalable approaches to improve situational awareness across the enterprise.

Human-factors design and situational awareness meet at the crossroads of "technology and the liberal arts," to borrow a phrase from the visionary Steve Jobs. These two factors in human-centered risk management can be realized by selecting targeted approaches, which will be discussed in more detail in subsequent articles; in the meantime, I invite others to participate in this discussion if you, too, have an interest in reimagining approaches to risk management.

James Bone is author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, lecturer on Enterprise Risk Management at Columbia’s School of Professional Studies in New York City and president of Global Compliance Associates, a risk advisory services firm and creator of the Cognitive Risk Management Framework.