Tag Archives: Cognitive Hack

April 17, 2019 by: James Bone Categories: Risk Management Reframing the Business Case for Audit Automation

… Plus 6 Steps to Enhanced Assurance

The audit profession is facing unprecedented demands, but there are a host of tools available to help. James Bone outlines the benefits of automating audit tasks.

Internal audit is under increasing pressure from many quarters: challenges to audit objectivity and ethical behavior, and requests to reduce or modify audit findings.[1] “More than half of North American Chief Audit Executives (CAEs) said they had been directed to omit or modify an important audit finding at least once, and 49 percent said they had been directed not to perform audit work in high-risk areas.” That’s according to a report by The Institute of Internal Auditors (IIA) Research Foundation, based on a survey of 494 CAEs and some follow-up interviews.

Challenges to audit findings are a normal part of the process for clarifying risks associated with weaknesses in internal controls and gaps that expose the organization to threats. However, reducing subjectivity and improving audit consistency is critical to minimizing second-guessing and enhancing credibility. One of the ways to improve audit consistency and objectivity is to reframe the business case for audit automation.

Audit automation gives audit professionals the tools to reduce their focus on low-risk, high-frequency areas. It also provides a means of detecting changes in those areas and monitoring the velocity of high-frequency risks that may lead to increased exposures or the development of new risks.

More importantly, challenges to audit findings associated with low-frequency, high-impact risks (less common) typically deal with an area of uncertainty that is harder to justify without objective data. Uncertainty, or “unknown unknowns,” is the hardest kind of risk to justify using a subjective, point-in-time audit methodology. Uncertainty, by definition, requires statistical and predictive methods that give auditors an understanding of the distribution of probabilities, as well as the correlations and degrees of confidence associated with a risk. Uncertainty, or probability management, provides auditors with next-level capabilities to discuss risks that are elusive to nail down. Automation gives internal auditors the tools to shape the discussion about uncertainty more clearly and to understand the context in which these events become more prevalent.
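
To make probability management concrete, here is a minimal sketch in Python, using entirely illustrative frequency and severity assumptions, of how a low-frequency, high-impact exposure can be expressed as a distribution of outcomes and a probability of exceeding a risk tolerance rather than a single point estimate.

    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative assumptions only: the event occurs ~0.3 times per year on
    # average (Poisson frequency) and each occurrence has a lognormal severity.
    N_TRIALS = 50_000
    event_counts = rng.poisson(lam=0.3, size=N_TRIALS)
    annual_loss = np.array([
        rng.lognormal(mean=13.0, sigma=1.0, size=n).sum() if n else 0.0
        for n in event_counts
    ])

    # Report a distribution of outcomes instead of a single point estimate.
    for pct in (50, 90, 99):
        print(f"P{pct} annual loss: ${np.percentile(annual_loss, pct):,.0f}")

    TOLERANCE = 2_000_000  # hypothetical risk appetite
    print(f"P(annual loss > tolerance): {(annual_loss > TOLERANCE).mean():.1%}")

Framing the conversation around percentiles and exceedance probabilities gives auditors and management a shared, objective vocabulary for uncertainty.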

Risk communication is one of the biggest challenges for all oversight professionals.[2] According to an article in Harvard Business Review,

“We tend to be overconfident about the accuracy of our forecasts and risk assessments and far too narrow in our assessment of the range of outcomes that may occur. Organizational biases also inhibit our ability to discuss risk and failure. In particular, teams facing uncertain conditions often engage in groupthink: Once a course of action has gathered support within a group, those not yet on board tend to suppress their objections — however valid — and fall in line.”

Everyone in the organization has a slightly different perception of risk that is influenced by heuristics developed over a lifetime of experience. Heuristics are mental shortcuts individuals use to make decisions. Most of the time, our heuristics work just fine with the familiar problems we face. Unfortunately, we do not recognize when our biases mislead us in judging more complex risks. In some cases, what appears to be lapses in ethical behavior may simply be normal human bias, which may lead to different perceptions of risk. How does internal audit overcome these challenges?

The Opportunity Cost of Not Automating

Technology is not a solution in and of itself; it is an enabler that makes staff more effective when integrated strategically to complement their strengths and address areas with room to improve. Automation creates situational awareness of risks, and technology solutions that improve situational awareness in audit assurance are the end goal. Situational awareness (SA) in audit is not a one-size-fits-all proposition. In some organizations, SA involves improved data analysis; in others, it may include a range of continuous monitoring and reporting in near real time. Situational awareness reduces human error by making sense of the environment with objective data.

A growing body of research demonstrates that human error is the biggest cause of risk in a wide range of organizations, from IT security to health care and organizational performance.[3][4][5] The opportunity to reduce human error and improve insight into operational performance is now possible with automation. Chief Audit Officers can lead, in collaboration with operations, finance, compliance and risk management, on automation that supports each of the key stakeholders who provide assurance.

Collaboration on automation reduces redundancies for data requests, risk assessments, compliance reviews and demands on IT departments. Smart automation integrates oversight into operations, reduces human error, improves internal controls and creates situational awareness where risks need to be managed. These are the opportunity costs of not automating.

A Pathway to Enhanced Assurance

Audit automation has become a diverse set of solutions offered by a range of providers, but that point alone should not drive the decision to automate. Developing a coherent strategy for automation is the key first step. Whether you are a Chief Audit Officer starting to consider automation or you and your team are well-versed in automation platforms, it may be a good time to rethink audit automation not as a one-off budget item but as a strategic imperative, integrated into operations and focused on the things the board and senior executives think are important. This will require the organization to see audit as integral to operational excellence and business intelligence. Reframing the role of audit through automation is the first step toward enhanced assurance.

Auditors are taught to be skeptical while conducting attestation engagements; however, there is no statistical definition for assurance. Assurance requires the use of subjective judgments in the risk assessment process that may lead to variability in the quality of audits between different people within the same audit function.[6] According to ISACA’s IS Audit and Assurance Guideline 2202 Risk Assessment in Planning, Risk Assessment Methodology 2.2.4, “all risk assessment methodologies rely on subjective judgments at some point in the process (e.g., for assigning weights to the various parameters). Professionals should identify the subjective decisions required to use a particular methodology and consider whether these judgments can be made and validated to an appropriate level of accuracy.” Too often these judgments are difficult to validate with a repeatable level of accuracy without quantifiable data and methodology. 
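
To illustrate the ISACA point about subjective weights, the hypothetical sketch below scores two audit entities under two equally plausible weighting schemes; the factor names, scores and weights are invented. The ranking flips depending on which scheme is chosen, which is exactly the kind of judgment that needs to be documented and validated.

    # Hypothetical risk factors scored 1-5 by an auditor.
    entities = {
        "Entity A": {"complexity": 5, "change": 2, "prior_findings": 3},
        "Entity B": {"complexity": 2, "change": 5, "prior_findings": 3},
    }

    # Two equally defensible subjective weighting schemes.
    schemes = {
        "scheme 1": {"complexity": 0.5, "change": 0.2, "prior_findings": 0.3},
        "scheme 2": {"complexity": 0.2, "change": 0.5, "prior_findings": 0.3},
    }

    def score(factors, weights):
        return sum(factors[k] * weights[k] for k in weights)

    for name, weights in schemes.items():
        ranked = sorted(entities, key=lambda e: score(entities[e], weights), reverse=True)
        print(name, [(e, round(score(entities[e], weights), 2)) for e in ranked])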

Scientific methods are the only proven way to develop degrees of confidence in risk assessments and correlations between cause and effect. “In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.”[7] The practical way to reduce the risk of sampling error is to automate data sampling so that far more of the population is tested, far more often. Trending the sampled data also helps auditors detect seasonality and other patterns that arise from the ebb and flow of business dynamics.
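
As a simple illustration of the sampling-error point, the sketch below computes the 95% margin of error around an observed control-exception rate for several sample sizes; the 4% exception rate and the sample sizes are assumptions, but they show why a small, point-in-time sample leaves wide uncertainty and why automated, continuous sampling of larger populations narrows it.

    import math

    P_HAT = 0.04  # observed exception rate in the sample (illustrative)
    Z = 1.96      # ~95% confidence

    for n in (30, 100, 1_000, 10_000):
        margin = Z * math.sqrt(P_HAT * (1 - P_HAT) / n)
        low, high = max(P_HAT - margin, 0.0), P_HAT + margin
        print(f"n={n:>6}: {P_HAT:.1%} ± {margin:.2%}  (95% CI {low:.2%} to {high:.2%})")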

Six Steps to Enhanced Assurance

  1. Identify the greatest opportunities to automate routine audit processes.
  2. Prioritize automation projects each budget cycle in coordination with operations, risk management, IT and compliance as applicable.
  3. Prioritize projects that leverage data sources that optimize automation projects across multiple stakeholders (operational data used by multiple stakeholders). One-offs can be integrated over time as needed.
  4. Develop a secondary list of automation projects that allow for monitoring, business intelligence and confidentiality.
  5. Design automation projects with levels of security that maintain the integrity of the data based on users and sensitivity of the data.
  6. Consider the questions most important to senior executives.[8]

“Look, I have got a rule,” General Powell said. “As an intelligence officer, your responsibility is to tell me what you know. Tell me what you don’t know. Then you’re allowed to tell me what you think. But you [should] always keep those three separated.”[9]

– Tim Weiner reporting in the New York Times about wisdom former Director of National Intelligence Mike McConnell learned from General Colin Powell

The business case for audit automation has never been stronger given the demands on internal audit. Today, the tools are available to reduce waste, improve assurance, validate audit findings and provide for enhanced audit judgment on the risks that really matter to management and audit professionals.


[1] https://www.journalofaccountancy.com/issues/2015/jun/internal-audit-objectivity.html

[2] https://hbr.org/2012/06/managing-risks-a-new-framework

[3] https://www.cio.com/article/3078572/human-error-biggest-risk-to-health-it.htm

[4] https://hbr.org/2016/09/the-biggest-cybersecurity-threats-are-inside-your-company

[5] https://www.irmi.com/articles/expert-commentary/performance-management-and-the-human-error-factor-a-new-perspective

[6] https://m.isaca.org/Knowledge-Center/ITAF-IS-Assurance-Audit-/IS-Audit-and-Assurance/Documents/2202-Risk-Assessment-in-Planning_gui_Eng_0614.pdf

[7]  Babbie, Earl R. (2013). “The logic of sampling.” The Practice of Social Research (13th ed.). Belmont, CA: Cengage Learning. pp. 185–226. ISBN 978-1-133-04979-1.

[8] https://fas.org/irp/congress/2004_hr/091304powell.html

[9] https://casnocha.com/2007/12/what-you-know-w.html

January 23, 2019 by: James Bone Categories: Risk Management Cognitive Hack: The New Battleground In Cybersecurity

James Bone is the author of Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind (Taylor & Francis, 2017) and is a contributing author for Compliance Week, Corporate Compliance Insights, and Life Science Compliance Update. James is a lecturer at Columbia University’s School of Professional Studies in the Enterprise Risk Management program and consults on ERM practice.

He is the founder and president of Global Compliance Associates, LLC and Executive Director of TheGRCBlueBook. James founded Global Compliance Associates, LLC to create the first cognitive risk management advisory practice. James graduated from Drury University with a B.A. in Business Administration, from Boston University with an M.A. in Management and from Harvard University with an M.A. in Business Management, Finance and Risk Management.


Christopher P. Skroupa: What is the thesis of your book Cognitive Hack: The New Battleground in Cybersecurity–The Human Mind and how does it fit in with recent events in cyber security?

James Bone: Cognitive Hack follows two rising narrative arcs in cyber warfare: the rise of the “hacker” as an industry and the “cyber paradox,” namely why the billions spent on cyber security fail to make us safe. The backstory of the two narratives reveals a number of contradictions about cyber security, as well as how surprisingly simple it is for hackers to bypass defenses. The cyber battleground has shifted from an attack on hard assets to a much softer target: the human mind. If human behavior is the new and last “weakest link” in the cyber security armor, is it possible to build cognitive defenses at the intersection of human-machine interactions? The answer is yes, but the change that is needed requires a new way of thinking about security, data governance and strategy. The two arcs meet at the crossroads of data intelligence, deception and a reframing of security around cognitive strategies.

The purpose of Cognitive Hack is to look not only at the digital footprint left behind from cyber threats, but to go further—behind the scenes, so to speak—to understand the events leading up to the breach. Stories, like data, may not be exhaustive, but they do help to paint in the details left out. The challenge is finding new information buried just below the surface that might reveal a fresh perspective. The book explores recent events taken from today’s headlines to serve as the basis for providing context and insight into these two questions.

Skroupa: IoT has been highly scrutinized as having the potential to both increase technological efficiency and broaden our cyber vulnerabilities. Do you believe the risks outweigh the rewards? Why?

Bone: The recent Internet outage in October of this year is a perfect example of the power and stealth of IoT risks. What many are not aware of is that hackers have been experimenting with IoT attacks in increasingly complex and potentially damaging ways. The TOR Network, used in the Dark Web to provide legitimate and illegitimate users anonymity, was almost taken down by an IoT attack. Security researchers have warned of other examples of connected smart devices being used to launch DDoS attacks that have not garnered media attention. As the number of smart devices spreads, the threat only grows. The anonymous attacker in October is said to have used only 100,000 devices. Imagine what could be done with one billion devices as manufacturers globally export them, creating a new network of insecure connections with little to no security in place to detect, correct or prevent hackers from launching attacks from anywhere in the world.

The question of weighing the risks versus the rewards is an appropriate one. Consider this: The federal government has standards for regulating the food we eat, the drugs we take, the cars we drive and a host of other consumer goods and services, but the single most important tool the world increasingly depends on has no gatekeeper to ensure that the products and services connected to the Internet don’t endanger national security or pose a risk to their users. At a minimum, manufacturers of IoT devices must put measures in place to detect these threats, disable IoT devices once an attack starts and communicate the risks of IoT more transparently. Lastly, the legal community has not kept pace with the development of IoT; however, this is an area that will be ripe for class action lawsuits in the near future.

Skroupa: What emerging trends in cyber security can we anticipate from the increasing commonality of IoT?

Bone: Cyber crime has grown into a thriving black market complete with active buyers and sellers, independent contractors and major players who, collectively, have developed a mature economy of products, services, and shared skills, creating a dynamic laboratory of increasingly powerful cyber tools unimaginable before now. On the other side, cyber defense strategies have not kept pace even as costs continue to skyrocket amid asymmetric and opportunistic attacks. However, a few silver linings are starting to emerge around a cross-disciplinary science called Cognitive Security (CogSec), Intelligence and Security Informatics (ISI) programs, Deception Defense, and a framework of Cognitive Risk Management for cyber security.

On the other hand, the job description of “hacker” is evolving rapidly with some wearing “white hats,” some with “black hats” and still others with “grey hats.” Countries around the world are developing cyber talent with complex skills to build or break security defenses using easily shared custom tools.

The rise of the hacker as a community and an industry will have long-term ramifications for our economy and national security that deserve more attention; otherwise, the unintended consequences could be significant. In the same light, the book looks at the opportunity and challenge of building trust into networked systems. Building trust in networks is not a new concept, but it is too often a secondary or tertiary consideration as systems designers rush products and services to market to capture share, leaving security considerations to corporate buyers. IoT is a great example of this challenge.

Skroupa: Could you briefly describe the new Cognitive Risk Framework you’ve proposed in your book as a cyber security strategy?

Bone: First of all, this is the first cognitive risk framework of its kind designed for enterprise risk management. The Cognitive Risk Framework for Cybersecurity (CRFC) is an overarching risk framework that integrates technology and behavioral science to create novel approaches to internal controls design that act as countermeasures, lowering the risk of cognitive hacks. The framework targets cognitive hacks as a primary attack vector because of the high success rate of these attacks and the overall volume of cognitive hacks versus more conventional threats. The cognitive risk framework is a fundamental redesign of enterprise risk management and internal controls design for cybersecurity, but it is equally relevant for managing risks of any kind.

The concepts referenced in the CRFC are drawn from a large body of research on multidisciplinary topics. Cognitive risk management is a sister discipline of a parallel body of science called Cognitive Informatics Security, or CogSec. It is also important to point out that, as the creator of the CRFC, I have borrowed the principles and practices prescribed here from cognitive informatics security, machine learning, artificial intelligence (AI), and behavioral and cognitive science, among other fields that are still evolving. The Cognitive Risk Framework for Cybersecurity revolves around five pillars: Intentional Controls Design, Cognitive Informatics Security, Cognitive Risk Governance, Cybersecurity Intelligence and Active Defense Strategies, and Legal “Best Efforts” Considerations in Cyberspace.

Many organizations are doing some aspect of a “cogrisk” program but haven’t formulated a complete framework; others have not even considered the possibility; and still others are on the path toward a functioning framework influenced by management. The Cognitive Risk Framework for Cybersecurity responds to this interim state, helping organizations transition to a new level of business operations (cognitive computing) informed by better intelligence to solve the problems that hinder growth.

Christopher P. Skroupa is the founder and CEO of Skytop Strategies, a global organizer of conferences.

https://www.forbes.com/sites/christopherskroupa/2016/11/21/cognitive-hack-the-new-battleground-in-cybersecurity/#746438ab7f3e

by: James Bone Categories: Risk Management Cognitive Hack: Trust, Deception and Blind Spots

When we think of hacking, we think of a network being hacked remotely by a computer nerd sitting in a bedroom, using code she’s written to steal personal data or money, or just to see if it is possible. The idea of a character breaking network security to take control of law enforcement systems has been imprinted in our psyche by images portrayed in TV crime shows; the real story, however, is at once more complex and simpler in execution.

The idea behind a cognitive hack is simple. A cognitive hack refers to the use of a computer or information system [social media, etc.] to launch a different kind of attack. A cognitive attack depends for its effectiveness on the ability to “change human users’ perceptions and corresponding behaviors in order to be successful.”[1] Robert Mueller’s indictment of 13 Russian operatives is an example of a cognitive hack taken to the extreme, but it demonstrates the effectiveness and subtleties of an attack of this nature.[2]

Mueller’s indictment of an elaborately organized and surprisingly low-cost “troll farm,” set up to launch an “information warfare” operation to influence U.S. political elections from Russian soil using social media platforms, is extraordinary and dangerous. The danger of these attacks is only now becoming clear, but it is also important to understand the simplicity of a cognitive hack. To be clear, the Russian attack is extraordinary in scope, purpose and effectiveness; however, these attacks happen every day for much more mundane purposes.

Most of us think of these attacks as email phishing campaigns designed to lure unsuspecting users into clicking a link that gives attackers access to their data. Russia’s attack is simply a more elaborate and audacious version, built to influence what we think and how we vote and to foment dissent between political parties and the citizenry of a country. That is what makes Mueller’s detailed indictment even more shocking.[3] Consider, for example, how TV commercials, advertisers and, yes, politicians have been very effective at using “sound bites” to simplify their product story and appeal to certain target markets. The art of persuasion is a simple way to explain a cognitive hack, which is an attack focused on the subconscious.

It is instructive to look at the Russian attack rationally from its [Russia’s] perspective in order to objectively consider how this threat can be deployed on a global scale. Instead of spending billions of dollars in a military arms race, countries can now influence the citizens of another country for a few million dollars simply through information warfare. A new, more advanced cadre of computer scientists is being groomed to defend against and build security for these sophisticated attacks. This is simply an old trick disguised in 21st-century technology through the use of the internet.

A new playbook for hacking political campaigns has been refined and used effectively around the world, as documented in a March 2016 article. For more than 10 years, elections in Latin America have been a testing ground for how to hack an election. The drama in the U.S. reads like one episode of a long-running soap opera, complete with “hackers for hire,” “middle-men,” political conspiracy and sovereign-country interference.

“Only amateurs attack machines; professionals target people.”[4]

Now that we know the rules have changed, what can be done about this form of cyber-attack? Academics, government researchers and law enforcement have studied this problem for decades, but the general public is largely unaware of how pervasive the risk is and the threat it poses to our society and the next generation of internet users.

I wrote a book, Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, to chronicle this risk and proposed a cognitive risk framework to bring awareness to the problem. Much more is needed to raise awareness among organizations, government officials and risk professionals around the world. A new cognitive risk framework is needed to better understand these threats, identify and assess new variants of the attack and develop contingencies rapidly.

Social media has unwittingly become a platform of choice for nation-state hackers, who can easily hide the identity of the organizations and resources involved in these attacks. Social media platforms are largely unregulated and therefore are not required to verify the identity and source of funding behind those who set up and operate these kinds of operations. This may change given the stakes involved.

Just as banks and other financial services firms are required to identify new account owners and their sources of funding, social media providers may need similar obligations: their platforms can be used as a venue for raising and laundering illicit funds to carry out fraud or attacks on a sovereign state. We now have explicit evidence of the threat this poses to emerging and mature democracies alike.

Regulation alone is not enough to address an attack this complex, and existing training programs have proven to be ineffective. Traditional risk frameworks and security measures are not designed to deal with attacks of this nature. Fortunately, a handful of information security professionals are now considering how to implement new approaches to mitigate the risk of cognitive hacks. The National Institute of Standards and Technology (NIST) is also working on an expansive new training program for information security specialists specifically designed to address the human element of security, yet the public is largely on its own. The knowledge gap is huge, and the general public needs more than an easy-to-remember slogan.

A national debate is needed among industry leaders to tackle security. Silicon Valley and the tech industry, writ large, must also step up and play a leadership role in combating these attacks by forming self-regulatory consortiums to deal with the diversity and proliferation of cyber threats, the vulnerabilities introduced in new technology launches and the development of more secure networking systems. The cost of cyber risk is far exceeding the rate of inflation and will eventually become a drag on corporate earnings and national growth rates as well. Businesses must look beyond the “insider threat” model of security risk and reconsider how the work environment contributes to exposure to cyberattacks.

Cognitive risks require a new mental model for understanding “trust” on the internet. Organizations must begin to develop new trust measures for doing business over the internet and with business partners. The idea of security must also be expanded to include more advanced risk assessment methodologies along with a redesign of the human-computer interaction to mitigate cognitive hacks.

Cognitive hacks are asymmetric in nature, meaning that the downside of these attacks can significantly outweigh the benefits of risk-taking if they are not addressed in a timely manner. Because of this asymmetry, attackers seek the easiest route to gain access. Email is one example of a low-cost and very effective attack vector that leverages the digital footprint we leave on the internet.

Imagine a sandy beach where you leave footprints as you walk, but instead of the tide erasing your footprints they remain forever, holding bits of data about you all along the way. Web accounts, free Wi-Fi networks, mobile phone apps, shopping websites and the like create a digital profile that may be more public than you realize. Now consider how your employees’ behavior on the internet during work connects back to this digital footprint, and you start to get an idea of how simple it is for hackers to breach a network.

A cognitive risk framework begins with an assessment of risk perceptions related to cyber risks at different levels of the firm. The risk perceptions assessment creates a Cognitive Map of the organization’s cyber awareness. This is called Cognitive Governance and is the first of five pillars for managing asymmetric risks. The other four pillars are driven by the findings in the cognitive map.
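
One way to picture the risk perceptions assessment is as a simple aggregation of survey scores by organizational level and risk area; the sketch below builds such a map from made-up responses, so the levels, risk areas and scores are purely hypothetical. Large gaps between levels for the same risk area are candidate blind spots.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical survey responses: (organizational level, risk area, perceived severity 1-5).
    responses = [
        ("Board", "phishing", 2), ("Board", "third-party", 4),
        ("Management", "phishing", 4), ("Management", "third-party", 3),
        ("Front line", "phishing", 5), ("Front line", "third-party", 2),
    ]

    # Average perceived severity per (level, risk area) cell of the cognitive map.
    cells = defaultdict(list)
    for level, area, severity in responses:
        cells[(level, area)].append(severity)

    for (level, area), scores in sorted(cells.items()):
        print(f"{level:<11} {area:<12} average perceived severity: {mean(scores):.1f}")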

A cognitive map uncovers the blind spots we all experience when a situation at work or on the internet exceeds our experience of how to deal with it successfully. Hackers use these natural blind spots to deceive us into changing our behavior: clicking a link, a video or a promotional ad, or even altering what we read. Trust, deception and blind spots are just a few of the concepts we must incorporate into a new toolkit called the cognitive risk framework.

There is little doubt that Mueller’s investigation into the sources and methods used by the Russians to influence the 2016 election will reveal more surprises, but one thing is no longer in doubt: the Russians have a new cognitive weapon that is deniable but still traceable, for now. They are learning from Mueller’s findings and will get better.

Will we?

[1] http://www.ists.dartmouth.edu/library/301.pdf

[2] https://www.bloomberg.com/news/articles/2018-02-17/mueller-deflates-trump-s-claim-that-russia-meddling-was-a-hoax

[3] https://www.scribd.com/document/371673084/Internet-Research-Agency-Indictment#from_embed

[4] https://www.schneier.com/blog/archives/2013/03/phishing_has_go.html


by: James Bone Categories: Risk Management Truth Is Fungible in Cyberspace

In 1981, Carl Landwehr observed that “without a precise definition of what security means and how a computer can behave, it is meaningless to ask whether a particular computer system is secure.”[i]

Researchers George Cybenko, Annarita Giani and Paul Thompson of Dartmouth College introduced the term “cognitive hack” in 2002 in an article entitled “Cognitive Hacking: A Battle for the Mind.” “The manipulation of perception —or cognitive hacking—is outside the domain of classical computer security, which focuses on the technology and network infrastructure.”[ii] This is why existing security practice is no longer effective at detecting, preventing or correcting security risks like cyber attacks.

Almost 40 years after Landwehr’s warning, cognitive hacks have become the most common tactic used by more sophisticated hackers and advanced persistent threats. Cognitive hacks are the least understood attacks and operate below conscious human awareness, allowing them to occur in plain sight. To understand the simplicity of these attacks, one need look no further than the evening news. The Russian attack on the presidential election is the best and most obvious example of how effective these attacks are. In fact, there is plenty of evidence that these attacks were refined in the elections of emerging countries over many years.

A March 16, 2016 article in Bloomberg, “How to Hack an Election,” chronicled how these tactics were used in Nicaragua, Panama, Honduras, El Salvador, Colombia, Mexico, Costa Rica, Guatemala and Venezuela long before they were used in the American elections.

“Cognitive hacking [Cybenko, Giani, Thompson, 2002] can be either covert, which includes the subtle manipulation of perceptions and the blatant use of misleading information, or overt, which includes defacing or spoofing legitimate norms of communication to influence the user.” The reports of an army of autonomous bots creating “fake news,” or at best misleading information, on social media and popular political websites are a classic signature of a cognitive hack.

Cognitive hacks are deceptive and highly effective because of a basic human bias to believe things that confirm our own long-held beliefs or the beliefs of our peer groups, whether social, political or collegial. Our perception is “weaponized” without our knowledge or full understanding that we are being manipulated. Cognitive hacks are most effective in a networked environment, where “fake news” can be picked up on social media sites as trending news or “viral” campaigns, encouraging even more readers to be influenced by the attack without any sign that an attack has been orchestrated. In many cases, the viral nature of the news is itself a manipulation, driven by an army of autonomous bots on various social media sites.

At its core, the manipulation of behavior has been in use for years in the form of marketing, advertisements, political campaigns and wartime propaganda. In the World Wars, patriotic movies were produced to keep public spirits up and to encourage volunteers to join the military and fight. ISIS has been extremely effective at using cognitive hacks to lure an army of volunteers to its jihad, even in the face of the perils of war. We are more susceptible than we believe, which creates our vulnerability to cyber risks and allows the risk to grow unabated in the face of huge investments in security. Our lack of awareness of these threats and the subtlety of the approach make cognitive hacks the most troubling problem in security.

I wrote the book Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind to raise awareness of these threats. Security professionals must better understand how these attacks work and the new vulnerabilities they create for employees, business partners and organizations alike. More importantly, these threats are growing in sophistication and vary significantly, requiring security professionals to rethink the assurance provided by their existing defensive posture.

The sensitivity of the current investigation into political hacks by the House and Senate Intelligence Committees may prevent full disclosure of the methods and approaches used; however, recent news accounts leave little doubt about their effect, described more than 14 years ago by researchers and seen more recently in Paris and in Central and South American elections. New security approaches will require a much better understanding of human behavior and collaboration from all stakeholders to minimize the impact of cognitive hacks.

I proposed a simple set of approaches in my book; however, security professionals must begin to educate themselves about this new, more pervasive threat and go beyond simple technology solutions to defend their organizations against it. If you are interested in receiving research or other materials about this risk, or about approaches to address it, please feel free to reach out.

[i] C.E. Landwehr, “Formal Models of Computer Security,” Computing Surveys, vol. 13, no. 3, 1981, pp. 247-278.

[ii] http://www.ists.dartmouth.edu/library/6.pdf

February 19, 2018 by: James Bone Categories: Risk Management The Emergence of a Cognitive Risk Era: Cognitive Risk Framework

The Emergence of a Cognitive Risk Era

 

 

Traditional risk frameworks, such as COSO ERM (1985), ISO 31000 (2009) and the Basel Capital Accord (1974), are relatively modern inventions, each formulated in response to major failures in managing financial, operational, regulatory and market risks. Traditional risk frameworks have been helpful in managing compliance risks, with an emphasis on internal controls, but they lack the rigor to evaluate the asymmetric risks that cause business failure.

December 9, 2017 by: James Bone Categories: Risk Management Is Cognitive Computing the Next Step to Help Fight Cybercrime?

James Bone, executive director of TheGRCBlueBook, participated in a webinar sponsored by IBM on the future of cybersecurity with two esteemed colleagues: a Research Professor and Founding Director of the Dynamic Decision Making Laboratory at Carnegie Mellon University, and a Technical Specialist at IBM Security.

The webinar looks at cognitive security – the concept of using data mining, machine learning, natural language processing and human-computer interaction to mimic the way the human brain functions and learns – in order to help fight cybercrime.

July 1, 2017 by: James Bone Categories: Risk Management The Emergence of a Cognitive Risk Era: Intentional Control Design and Machine Learning: Creating Situational Awareness

In my previous articles, I introduced human-centered risk management and the role that Cognitive Risk Governance should play in designing the risk and control environment outcomes that you want to achieve. One of the key outcomes was briefly described as situational awareness, which includes the tools and ability to recognize and address risks in real time. In this article, I will delve deeper into how to redesign the organization using cognitive tools while reimagining how risks will be managed in the future. Before I explore “the how,” let’s take a look at what is happening right now.

This concept is not some futuristic state! On the contrary, it is happening in real time. BNY Mellon, one of the oldest firms on Wall Street, has started a transformation to a cognitive risk governance environment. Mellon is not the only Wall Street titan leading this charge. JP Morgan, BlackRock and Goldman Sachs, among others, are hiring Silicon Valley talent to transform banking, in part to remain competitive and to strategically reduce costs, innovate and build scale not possible with human resources alone. The banks have taken a very targeted approach to specific areas of opportunity within the firm and are seeking new ways to bring innovation to customer service and new product development and to create efficiencies that will have profound implications for risk, audit, compliance and IT now and in the foreseeable future.

As these early-stage projects expand, the transformation taking place today will position these firms with competitive advantages few can anticipate. I do not know the business plans of BNY Mellon, JP Morgan, BlackRock or Goldman Sachs, but it is safe to say that each of these firms will see the benefits of implementing targeted solutions with smart systems to augment decision-making and drive growth. They may also reduce risks in the process. However, as these firms grow their smart technology portfolios, it will become obvious that a strategic plan must include an overarching Cognitive Risk Governance program that goes deeper than IT efficiencies, investment management and one-off cost savings in contract reviews. I applaud the approach these firms are taking; these are low-hanging “tactical fruit,” but one must start somewhere!

The real question is what role will risk management, audit, and compliance play in this new cognitive risk era?  Will oversight functions continue to be observers of change or leaders in change with a risk framework that contemplates an enterprise approach to smart systems?  Will oversight functions seek opportunity in this new cognitive risk era or choose to ignore the growth of these advances?

The Cognitive Risk Framework for Enterprise Risk Management has been presented in earlier articles as a set of pillars that integrate human elements with technology, because technology alone is not enough! Smart systems will reduce costs and, in some cases, redundant staff; in other cases they will reduce the need to add people to build scale, and more. However, without a more comprehensive approach, the limits of a technology-only strategy will become obvious as soon as the cost savings decline.

If firms truly want to create a multiplier effect of cost savings and scale, the transformation must include technology that helps humans become more productive!

If operational and residual risks represent the bulk of inefficient bottlenecks, or have limited a firm’s ability to respond quickly to changes in the business environment, a well-designed cognitive risk framework offers firms the ability to free up the back- and middle-office environment. How so?

Introduction to Intentional Control Design, Machine Learning & Situational Awareness

First, automation trumps big data analytics!

I know that Big Data, Predictive Analytics, Machine Learning and Artificial Intelligence sound sexy, seem cool and are the future! But let’s work in the real world for a moment. Google has made great advances in machine learning, but if you actually take the time to read its research literature (since about 1% or less of the pundits do), you will find that the actual use cases have been limited. The real opportunities involve routine processes with very large pools of well-defined data.

You can’t teach a machine to be smart with dumb data

If you have unlimited resources or simply want to throw away money, then start a Big Data project with unstructured, random data! Some may argue the benefits of this approach, but consider this: most firms produce petabytes of structured data every single day in production environments, and that data is rarely leveraged to its full capacity. Why not start with a good data source and automate the processes that produce this data to help humans get their jobs done more efficiently? Want to ensure internal controls work flawlessly? Automate them! Want to ensure compliance with regulatory mandates? Automate it! Want to produce real-time audit sampling and monitoring? Automate it!
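
As a minimal sketch of what “automate it” can mean for an internal control, the snippet below scans a structured payments extract and flags items that violate two illustrative rules, an approval threshold and a segregation-of-duties check; the file name, field names and threshold are hypothetical, not a standard layout.

    import csv
    from pathlib import Path

    APPROVAL_LIMIT = 10_000  # hypothetical threshold requiring a second approver

    def payment_exceptions(extract: Path):
        """Yield (payment_id, issue) for rows that break the illustrative control rules."""
        with extract.open(newline="") as f:
            for row in csv.DictReader(f):
                if float(row["amount"]) > APPROVAL_LIMIT and not row.get("second_approver"):
                    yield row["payment_id"], "missing second approval"
                if row["requested_by"] == row["approved_by"]:
                    yield row["payment_id"], "requester approved own payment"

    if __name__ == "__main__":
        # Scheduled daily, the exceptions can feed a dashboard or the audit workpapers.
        for payment_id, issue in payment_exceptions(Path("payments_extract.csv")):
            print(payment_id, issue)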

Design the risk, compliance, IT and audit outcomes that you need! Intentional Control Design takes advantage of machine learning in the most efficient manner, through the corpus of data that already exists in production systems.

Once you do that you have your big data projects solved! Need audit data to test compliance? Done!  Need risk assessments with real data? Done!  Need to check fraudulent activity? Done!

If you want to create situational awareness for how your firm is operating in real time, design it! Automation trumps Big Data analytics, but most get this backwards!

Unstructured data requires human annotation, which increases costs exponentially, so why start there? It may not be sexy, but the money you save will make you feel better than the money you lose chasing glamor projects that add little value.

Automation gives you situational awareness through true transparency! Transparency gives the board and senior management the ability to adjust in a more timely manner. If you want a no-surprise business environment, consider designing one… It doesn’t happen by accident, nor does it happen by threatening staff not to make mistakes!

Cars are safer today than 40 years ago because of design! Airline travel is safer today because of design.  Amazon, Facebook, Google, and Apple have overtaken traditional business models by design!

There are a number of residual benefits that I haven’t discussed in detail yet, such as reductions in cyber risk and employee burnout, increased staff productivity and many more. I saved these for last because we always forget that humans are the real engines of business growth.

If you are still an unbeliever, just take a look at the store closings in the retail industry caused by not listening to the change created by the internet and firms like Amazon. I understand that change is hard, but without change it will be harder to keep up and survive in an environment that moves in nanoseconds!

June 3, 2017 by: James Bone Categories: Risk Management The Emergence of a Cognitive Risk Era: The Role of Cognitive Risk Governance

Musings of a Cognitive Risk Manager

In my last article, I explained the difference between traditional risk management and human-centered risk management and began building the case for why we must reimagine risk management for the 21st century. I purposely did not get into the details right away because it is really important to understand WHY some “Thing” must change before change can really happen. In fact, change is almost impossible without understanding why.

Why put on sunscreen if you didn’t know that skin cancer is caused by too much exposure to ultraviolet rays from the sun? We know that drinking and driving is one of the deadliest causes of highway fatalities, BUT we still do it! Knowing the risk of some “Thing” doesn’t prevent us from taking the chance anyway. This is why diets are so hard to maintain and habits are so hard to change. We humans do irrational things for reasons that we don’t fully understand. That is precisely WHY we need Cognitive Risk Governance.

Cognitive risk governance is the “Designer” of human-centered risk management! The sunscreen is effective (if you use it properly!) because the formulation of its ingredients was designed to protect our skin from ultraviolet rays. Diets are designed to help us lose weight. Therefore, cognitive risk governance must also design the outcomes that we seek!

This is radically different from any other risk framework. If you take the time to study any framework, 99% of the guidance is focused on the details of the activity you must do first. Do risk assessments, develop internal controls, create policies and procedures, blah blah blah…. The details are important, but what if your focus is on the wrong stuff, which too often is the case? If you have ever heard the phrase “Shoot first, then aim,” then you now fully understand why most risk frameworks don’t work.

The fallacy of action is the root cause of failure in risk management programs.

It is really important to understand this concept, so let me provide an illustration. If you want to create a car with better fuel efficiency, you must first design the car to get more mileage from the same amount of fuel.

In order to achieve better efficiency, you must understand why cars are not fuel efficient. And in order to fully understand why cars are not fuel efficient, manufacturers must reimagine the car.

However, before you start changing the car you must decide how efficient you want the car to become.

Design starts with imagining the end state and then determining what steps to take to achieve the goal. This is how cognitive risk governance works in human-centered risk management.

The role of cognitive risk governance is to design new ways to reduce risks across the organization. In order to reduce risks we must understand why certain risks exist and determine the right reduction in risk we want to achieve. This is why cognitive risk governance is a radical departure from traditional risk management.

In contrast, traditional risk management advocates for a Top Ten list of risks or a Risk Repository that inventories events. Unfortunately, the goal seems to be focused on monitoring risks as opposed to reducing them. Risks cannot be completely eliminated; therefore, any “activity-focused” risk program will always find new risks to add to the list. A human-centered risk management program is focused on reducing risks to acceptable levels through design. But not all risks! The focus is on complex risks!

Cognitive risk governance is the process of designing human-centered risk management to address the most complex risks. Any distribution of risk data will tell you that 75-80% of risks are high-frequency, low-impact risks, yet traditional risk programs focus 90% of their energy on the least important risks. The opportunity presented by a cogrisk governance model is to separate risks into appropriate levels of importance. Risks represent a range (distribution) of outcomes; therefore, one-dimensional approaches will inevitably fail to address the full range of complex risks.
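
A quick way to operationalize that separation is to bucket loss events by frequency and impact and let the buckets, not the inventory, drive where the design effort goes; in the sketch below, the events, cutoffs and dollar figures are invented for illustration.

    # Hypothetical loss events: (name, occurrences per year, average impact in dollars).
    events = [
        ("data entry error", 250, 400),
        ("duplicate payment", 40, 2_500),
        ("vendor breach", 0.2, 3_000_000),
        ("regulatory fine", 0.1, 5_000_000),
    ]

    FREQ_CUTOFF = 10         # events per year (illustrative)
    IMPACT_CUTOFF = 100_000  # dollars (illustrative)

    def bucket(freq, impact):
        f = "high-frequency" if freq >= FREQ_CUTOFF else "low-frequency"
        i = "high-impact" if impact >= IMPACT_CUTOFF else "low-impact"
        return f"{f} / {i}"

    for name, freq, impact in events:
        print(f"{name:<18} {bucket(freq, impact):<27} expected annual loss ${freq * impact:,.0f}")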

Developing a Cognitive Risk Governance Tool Kit

The toolkit for designing cognitive risk governance involves an understanding of a few concepts that any organization can implement.

Cognitive risk governance starts with a clear understanding of the difference between “Uncertainty” and “Risks.” Uncertainty is simply what you do not know, or what you lack clear insight into, including the impact of its occurrence. Risks are known, but that doesn’t mean you fully understand their nature. I do not subscribe to the semantic exercise of Known-Knowns, Known-Unknowns and Unknown-Unknowns. There is no rigor in this exercise, nor does it provide new insights into solving problems of importance.

The next concept in a cogrisk governance program involves developing risk intelligence and active defense. Risk intelligence is quantitative and qualitative data from which analysts are better able to develop insights into complex risks. The processes of data management, data analysis and the formulation of risk intelligence may require a multidisciplinary team of experts, depending on the complexity of the organization and its risk profile.

Active defense, on the other hand, is the process of implementing targeted solutions driven by risk intelligence to capture new opportunities and reduce risk exposures that impede growth. Risk Intelligence and active defense will require solutions and new tools that may not be in use in traditional risk programs. Organizations are generating petabytes of data that are seldom leveraged strategically to manage risk. A cogrisk governance program is responsible for designing risk intelligence and active defense in ways that leverage these stores of data as well as external sources of intelligence.

In traditional risk, the “Three Lines of Defense” model is a common approach used to defend the organization, yet to understand why change is needed, one need only look at how the military is re-engineering its workforce to a 21st-century model to address the new battleground being fought with technology and cognition. It is no longer reasonable to expect an army of people with limited tools to analyze the movement of petabytes of data into, across and outside of an organization with confidence.

The transformation in the military is being led by the Joint Chiefs of Staff, which offers a corollary for Risk, Compliance, Audit and IT professionals. Risk professionals must lead the change from 19th-century risk practice to 21st-century human-centered risk management. Existing risk frameworks such as COSO, ISO and Basel have laid a good foundation from which to build, but more needs to be done.

I will address these opportunities in more detail in subsequent articles but for now let’s move to the next concept in a CogRisk governance model. The intersection of human-machine interactions has been identified as a critical vulnerability in cyber security. However, poorly designed workstations that require employees to cobble together disparate data and systems to complete work tasks represent inefficiencies that create unanticipated risks in the form of human error.

The intersection of the human-machine interaction represents two significant opportunities in a human-centered risk management program. The first opportunity is an improvement in cybersecurity vulnerability, and the second is the capture of more efficient processes through productivity gains and reductions in high-frequency, low-impact risks. I will defer a discussion of the opportunity to improve cybersecurity to subsequent articles because of the scope of the discussion. However, I do want to mention that a focus on reducing human error risks is underappreciated.

The equation is a simple one, but very few organizations ever take the time to calculate the cost of inefficiency, even in firms with advanced Six Sigma programs. Here is an oversimplified model: Human error (75%) + Uncontrollable risks (25%) = operational inefficiency (100%). From here it is easy to see the benefit of human-centered risk management. This is obviously a simplified model, including the statistical data, but not one far from reality if you look at empirical cross-industry analysis.

Human-centered risk management focuses on redesigning the causes of human error, providing real payback in efficiency and business objectives. A risk program designed to facilitate safe and efficient interactions with technology improves risk management and helps grow the business. More on that topic later!

In the next article, I will discuss Intentional Control design and practical use cases for machine learning and artificial intelligence in risk management.

As I have done in previous articles, I invite others to become active participants in helping design a human-centered risk management program and contribute to this effort. If you are a risk professional, auditor, compliance officer, technology vendor or simply an interested party, I hope that you see the benefit of these writings and contribute if you have real-life examples.

James Bone is author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, lecturer on Enterprise Risk Management at Columbia’s School of Professional Studies in New York City and president of Global Compliance Associates, a risk advisory services firm and creator of the Cognitive Risk Management Framework.

 

May 21, 2017 by: James Bone Categories: Risk Management The Emergence of a Cognitive Risk Era: Human-Centered Risk Management

Musings of a Cognitive Risk Manager

Before beginning a discussion of human-centered risk, it is important to provide context for why we must consider new ways of thinking about risk. The context is important because the change impacting risk management has happened so rapidly we have hardly noticed. If you are under the age of 25, you take for granted the Internet as we know it today and the ubiquitous utility of the World Wide Web. Twenty-five years ago, dial-up modems were the norm and desktop computers with “Windows” were rare except in large companies. Fast-forward to today: we don’t give a second thought to the changes a digital economy has made in how we work, communicate, share information and conduct business.

What hasn’t changed (or what hasn’t changed much) during this same time is how risk management is practiced and how we think about risks. Is it possible that risks and the processes for measuring risk should remain static? Of course not, so why do we still depend solely on using the past as prologue for potential threats in the future? Why are qualitative self-assessments still a common approach for measuring disparate risks? More importantly, why do we still believe that small samples of data, taken at intervals, provide senior management with insights into enterprise risk?

The constant is human behavior!

Technology has been successful at helping us get more done whenever and wherever we need to conduct business. The change brought on by innovation has nearly eliminated the separation between our work and personal lives; as a result, businesses and individuals are now exposed to new risks that are harder to understand and measure. The semi-state of a hardened enterprise with a soft middle has created a paradox in risk management: the paradox of Robust Yet Fragile. Organizations enjoy robust technological capability to network, partner and conduct business 24/7, yet we are more vulnerable, or fragile, to massive systemic risks. Why are we more fragile?

The Internet is the prototypical example of a complex system that is “scale-free,” with a hub-like core structure that makes it robust to the random loss of individual nodes yet fragile to targeted attacks on highly connected nodes, or hubs. Likewise, large and small corporations are beginning to look more like diverse forms of complex systems, with increased dependency on the Internet as a service model and on a distributed network of vendors who provide a variety of services no longer deemed critical or cost-effective to perform in house.
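
The robust-yet-fragile property is easy to see in a small simulation: remove the same number of nodes from a scale-free graph first at random and then by highest degree, and compare how much of the network stays connected. This is a minimal sketch assuming the networkx library is available; the graph size and removal count are arbitrary.

    import random
    import networkx as nx

    def giant_component_share(g: nx.Graph) -> float:
        """Fraction of remaining nodes still in the largest connected component."""
        return len(max(nx.connected_components(g), key=len)) / g.number_of_nodes()

    random.seed(1)
    G = nx.barabasi_albert_graph(n=2000, m=2, seed=1)  # scale-free, hub-like core
    K = 100                                            # nodes to remove (5%)

    # Random failures: the network barely notices.
    g_random = G.copy()
    g_random.remove_nodes_from(random.sample(list(G.nodes), K))

    # Targeted attack on the most connected hubs: the network fragments.
    hubs = [node for node, _ in sorted(G.degree, key=lambda d: d[1], reverse=True)[:K]]
    g_targeted = G.copy()
    g_targeted.remove_nodes_from(hubs)

    print(f"random loss:   {giant_component_share(g_random):.0%} of remaining nodes connected")
    print(f"targeted loss: {giant_component_share(g_targeted):.0%} of remaining nodes connected")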

Collectively, organizations have leveraged complex systems to respond to customer and stakeholder demands and create value, unwittingly becoming more exposed to fragility at critical junctures. Systemic fragility has been tested during recent distributed denial-of-service (DDoS) attacks on critical Internet service providers and recent ransomware attacks, both of which spread with alarming speed. What changed? After each event, risk professionals breathe a sigh of relief and continue pursuing the same strategies that leave organizations vulnerable to massive failure. The Great Recession of 2009 is yet another example of the fragility of complex systems and of a tepid response to systemic risks. Do we mistakenly take survival as a sign of a cure for the symptoms of systemic illness?

After more than 20 years of explosive productivity growth, the layering of networked systems now poses some of the greatest risks to future growth and security. Inexplicably, productivity has stalled as humans become the bottleneck in this infrastructure. Billions of dollars are currently rushing in to finance the next phase of the Internet of Things, which will extend our vulnerabilities to devices in our homes, our cars and eventually more. Is it really possible to fully understand these risks with 19th-century risk management?

The dawn of the digital economy has resulted in the democratization of content and the disintermediation of past business models in ways unimaginable 20 years ago. I will spare you the boring science behind the limits of human cognition but let’s just say that if you can’t remember what you had for dinner last Wednesday night you are not alone.

But is that enough reason to change your approach to risk management? Not surprisingly, the answer is Yes! Acknowledging that risk managers need better tools to measure more complex and emerging risks should no longer be considered a weakness. It also means that expecting employees to follow, without fail or assistance, the growing complexity of policies, procedures and IT controls required to deal with a myriad of risks may be unrealistic without better tools. 21st century risk management approaches are needed to respond to the new environment in which we now live.

Over the last 30 years, risk management programs have been built “in response” to risk failures in systems, processes and human error. Human-centered risk management starts with the human and redesigns internal controls to optimize the objectives of the organization while reducing risks. This may sound like a subtle difference, but it is in fact a radically different approach, albeit not a new one.

Human-factors engineers first met in 1955 in Southern California, but the discipline’s contributions to safety across diverse industries are now underappreciated. We don’t give a second thought to the technology that protects us when we travel in cars, trucks and airplanes or undergo complex medical procedures. These advances in risk management did not happen by accident; they were designed into the products and services we enjoy today!

Each of these industries recognized that human error posed the greatest risk to the objectives of their respective organizations. Instead of blaming humans, however, they sought ways to reduce the complexity that leads to human error and found innovative ways to grow their markets while reducing risks. Imagine designing internal controls that are as intuitive as using a cell phone, allowing employees to focus on the job at hand instead of being distracted by multitasking! A human-centered risk program looks at the human-machine interaction to understand how the work environment contributes to risk.

I will return to this concept in subsequent papers to explain how the human-machine interaction contributes to risk. For now, suffice it to say that there is ample research and empirical data to support the argument. To further explain a human-centered risk approach, we must also understand how decision-making is affected by 19th-century risk practices.

Situational awareness is a critical component of human-centered risk management. It rests on the perception of events, the comprehension of their meaning and the projection of their status after conditions change or new data is introduced: in short, the ability to predict clearly how change affects outcomes and expectations. The opportunity in risk management is to improve situational awareness across the enterprise. Enterprise risks are important, but they are not all equal and should not be treated the same. Situational awareness helps senior executives understand the difference.

The challenge in most organizations is that situational awareness is assumed to be a byproduct of experience and training and is seldom revisited when the work environment changes to absorb new products, processes or technology. The failure to understand this vulnerability in risk perception happens at all levels of the organization, from the boardroom down to the front line. The vast majority of change introduced in organizations is minor in nature but accumulates over time, contributing to a lack of transparency, or inattentional blindness, that degrades situational awareness. This is one of the many reasons organizations are surprised by unanticipated events. We simply cannot see it coming!

Human-centered risk management focuses on designing situational awareness into the work environment from the boardroom down to the shop floor. This multidisciplinary approach requires a new set of tools and cognitive techniques to understand when imperfect information could lead to errors in judgment and decision-making. The principles and processes for designing situational awareness will be discussed in subsequent articles. The goal of human-centered risk management is to design scalable approaches to improve situational awareness across the enterprise.

Human-factors design and situational awareness meet at the "crossroads of technology and the liberal arts," to borrow from the visionary Steve Jobs. These two pillars of human-centered risk management can be achieved by selecting targeted approaches, which will be discussed in more detail in subsequent articles; in the meantime, I invite others who share an interest in reimagining risk management to join the discussion.

James Bone is author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, lecturer on Enterprise Risk Management at Columbia's School of Professional Studies in New York City, president of Global Compliance Associates, a risk advisory services firm, and creator of the Cognitive Risk Management Framework.

May 17, 2017 by: James Bone Categories: Risk Management The Emergence of a Cognitive Risk Era

Musings of a Cognitive Risk Manager

Traditional risk managers have conducted business the same way for most of the last 30 years, even as technology has advanced beyond their ability to keep pace. Through each financial crisis, risk management has been presented with opportunities to change but has instead resorted to the same approach and the same inevitable outcomes. As competitive pressures grow, boards expect executives to do more with less, pushing risk professionals to adopt creative new ways to add value.

Risks are more complex and systemic in a digital economy, with the potential to amplify across disparate vectors critical to business performance. Social media is just one of the many new amplifiers of risk that must be incorporated into enterprise risk programs. Asymmetric risks, like cyber risk, require a three-dimensional response that pairs a deeper understanding of the threat's complexity with simplicity of execution. The challenge of these more complex risks is even more daunting given the speed of business and the distributed nature of data in an interconnected digital economy.

The WannaCrypt cyber attack is just another example of how human behavior has become the key amplifier of risk in a digital economy, and of how situational awareness is part of the solution. There are many stories and opinions about the events and circumstances of the attack, and more details will emerge over time. The truth is that the world got lucky: one person's astute, quick action unintentionally stopped the spread of the malware before broader damage could be done. No one should breathe a sigh of relief, because the attackers are now aware of the mistake they made and will, no doubt, correct it and learn to exploit weaknesses more effectively. The real question is: what did we learn?

The answer is not yet clear! What is clear is that cyber threats will continue to find ways to exploit the human element, requiring new approaches to understanding the risk and finding solutions. But I digress….

The purpose of these musings is to introduce the emergence of a cognitive era in risk and propose a path for adopting a human-centered strategy for addressing asymmetric complexity in enterprise risk. The themes I will present in a series of articles will build the case for a supplemental approach to risk that incorporates an understanding of vulnerabilities at the human-machine interaction, applies human-factors design to internal controls and introduces new technologies to enhance performance in managing and reducing human judgment error for complex risks.

Technology has evolved from a tool designed to free humans from manual work into information networks that have created knowledge workers from the boardrooms of Wall Street to the factory floor. The excess capital created by technology is now being reinvested in next-generation tools for more advanced uses.

Innovations in machine learning, artificial intelligence and other smart technologies promise even greater opportunity for personal convenience and wealth creation. Risk professionals must begin to understand the methods used in these cognitive support tools in order to evaluate which ones work best to address complex risks. Smart technology is spreading rapidly through business applications; however, the range of capabilities and outcomes varies widely among solutions, so an understanding of the limitations of each vendor's predictive powers is important. At the same time, the rapid advance of technological innovation has created a level of complexity that is spreading risk in ways that are hard to imagine. It now appears that we are not connecting the dots between this technological inflection point and human behavior. This is a complex discussion that will take a series of articles to fully unpack.

Risk professionals must begin to understand how human behavior contributes to risk, as well as the vulnerabilities at the human-machine interaction. Human error is increasingly cited as the leading cause of risk events in cross-industry data covering IT, healthcare, automotive, aeronautics and other sectors.[i][ii][iii][iv][v] Unfortunately, risk strategies incorporating human factors have been widely underrepresented in risk programs to date. That may be changing! At the core of this change is one constant: humans! Risk professionals who combine human-factors design with advanced analytical approaches and behavioral risk controls will be better positioned to bring real value to business strategy.


[i] https://media.scmagazine.com/documents/82/ibm_cyber_security_intelligenc_20450.pdf

[ii] https://www.nap.edu/read/9728/chapter/4

[iii] http://www.hse.gov.uk/humanfactors/topics/03humansrisk.pdf

[iv] http://www.cbsnews.com/news/medical-errors-now-3rd-leading-cause-of-death-in-u-s-study-suggests/

[v] https://www.hq.nasa.gov/office/codeq/rm/docs/hra.pdf