Monthly Archives: July 2014
One cold day in early January 1956, Herbert Simon started his class at Carnegie Tech’s Graduate School of Industrial Administration with a startling announcement: “Over the Christmas holiday, Al Newell and I invented a thinking machine.” That announcement 58 years ago ushered in the “thinking machine” and the age of artificial intelligence.
Simon’s “thinking machine,” called the Logic Theorist, was designed to prove theorems from Bertrand Russell and Alfred North Whitehead’s Principia Mathematica. Fast forward almost 60 years and you will find a host of applications that boast the use of some form of artificial intelligence, or AI. Examples ranging from IBM’s chess-playing Deep Blue and Jeopardy!-winning Watson to fields as diverse as medicine, smart elevators, and voicemail routers have captured the imagination.
To date, AI has been applied to make our lives easier and, in some cases, to replace or reduce the need for human workers in jobs where routine chores can be easily handled by a machine. However, intelligence has not been outsourced, and the promise of a thinking machine remains unfulfilled.
In fact, attempts to “build” a functioning replica of a human brain on a computer are in serious jeopardy. Neuroscientists have penned an open letter critiquing the scope of the European Commission’s Human Brain Project (yes, there is a 10-year, $1.4 billion project underway). It seems that defining and categorizing the complex functions of the brain in digital form is challenging even for the world’s best neuroscientists!
Interestingly, the concept of artificial intelligence is increasingly applied to Big Data. Artificial intelligence and Big Data are not interchangeable terms, though a form of artificial intelligence may be used in the execution of a Big Data project. Confused? It is not surprising. If you Google “Big Data” you will get a range of disparate definitions that border on the evangelical. What you will not likely hear are the structural and technical limits of artificial intelligence and its use in Big Data. Let’s save that topic for another day.
There are more than 100 firms involved in artificial intelligence and/or Big Data projects, collectively seeking to make sense of the billions of records stored in electronic and non-electronic warehouses. Firms call this structured (electronic) and unstructured (electronic, media, and paper-based) data. Firms widely boast about the gains made using numeric data in well-structured formats, but little is heard about the failures with unstructured data or attempts to apply large-scale predictive modeling to it.
Separating Fact from Fiction
The concept of Big Data is a “catch all” term that refers to a variety of technology solutions positioned to assist with the implementation of data analytics to glean information from stores of data.
Terms often used to describe the potential benefits of Big Data include concepts such as predictive analytics or In-Memory processing. In other examples, marketing terms are used to describe the attributes of Big Data, such as “to unlock the business intelligence hidden in globally distributed [data]” or Big Data analytics “is about uncovering hidden correlations, unknown patterns and valuable information to enable a better understanding of the business environment, in effect leading to superior decision making capability.” These definitions and descriptions range from the mildly optimistic to the wildly exaggerated.
So how does one separate truth from fiction in the definitions and solutions offered by Big Data vendors?
Let’s start with the basic math used in statistics to transform raw numbers into useful information. There are two basic classifications of statistical analysis: Descriptive statistics and Inferential statistics. Descriptive statistics is used to “describe” or summarize data in meaningful ways. For example, calculating the mean, median or mode of data is useful if you want to understand the ranking or segregation of data into groups. Descriptive statistics can be used to show past patterns, trends, or changes in the data but cannot be used to predict whether these patterns will continue into the future.
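To make the distinction concrete, here is a minimal sketch of descriptive statistics in Python using the standard library. The loss figures are invented purely for illustration; the point is that these summaries describe data you already have.

```python
# Descriptive statistics: summarizing events that have already occurred.
# The loss amounts below are hypothetical, for illustration only.
import statistics

losses = [12.0, 7.5, 12.0, 30.2, 4.1, 12.0, 9.8, 15.6]  # e.g. losses in $000s

print("mean:  ", statistics.mean(losses))    # average loss
print("median:", statistics.median(losses))  # middle value, robust to outliers
print("mode:  ", statistics.mode(losses))    # most frequently occurring value
```

Note that nothing in these three numbers tells you what next quarter’s losses will look like; they only summarize the past.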
As you can see, descriptive statistics are very helpful in understanding large amounts of data and are used widely in business. Descriptive statistics, however, have limits in that you are simply describing the specific events or losses that have already occurred. The mistake that many users of descriptive statistics make is attempting to use the data to predict future events. Depending on the type of data you have collected, risk events can change widely from one point in time to another. This is why some risk professionals are surprised when their descriptive models fail to identify a major risk before it happens.
In order to develop a form of predictive modeling, risk professionals must use the second classification of statistical analysis: Inferential statistics.
Inferential statistical methods require more robust analytics. Users of inferential statistics must first understand what they are attempting to predict. Users should decide, for example, whether they are attempting to determine a correlation between two or more variables. It’s important to understand that correlation does not imply causation!
Alternatively, are you attempting to make an inference (or predictive probability) about the population of a dataset within a certain degree of confidence? These are very simplified framings, but if they are not well structured they will contribute to bad outcomes.
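A correlation check is often the first step. Below is a small sketch of the Pearson correlation coefficient on made-up monthly figures; a value near +1 or -1 means the two variables move together, but, as noted above, it says nothing about which one causes the other.

```python
# A minimal Pearson correlation sketch. The data are invented for
# illustration: correlation here does not establish causation.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly figures: staff overtime hours vs. operational incidents.
overtime  = [10, 12, 15, 18, 22, 25]
incidents = [2, 2, 3, 4, 5, 6]
r = pearson(overtime, incidents)
print(f"r = {r:.3f}")  # close to +1: strongly correlated, causation unproven
```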
Quantifying future risk events with a certain degree of confidence requires a large repository of risk data. The larger the quantity and the higher the quality of the data, the higher the level of confidence one may be able to assign to one’s analytical models. Inferential statistics starts with a detailed analysis of the sampling strategy that will be used. Sampling must be sufficient to ensure that your sample is representative of the population about which you wish to make an inference.
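To illustrate what “within a certain degree of confidence” means in practice, here is a sketch of a 95% confidence interval for a population mean, computed from a sample using the normal approximation. The sample values are invented, and a real analysis would need to justify the sampling strategy and distribution assumptions discussed here.

```python
# Sketch: a 95% confidence interval for a population mean from a sample,
# using the normal approximation. Data are invented for illustration.
import math
import statistics

sample = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 4.7, 5.0, 4.3, 4.6]  # e.g. loss severities
n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

z = 1.96  # z-score for ~95% confidence under the normal approximation
low, high = mean - z * se, mean + z * se
print(f"95% CI for the mean: ({low:.2f}, {high:.2f})")
```

A larger, higher-quality sample shrinks the standard error and therefore the interval, which is exactly why the quantity and quality of the data drive the confidence you can claim.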
It is also critical that you have an understanding of the distribution patterns of your data. Different industries produce data with a wide variety of distribution patterns. There are a host of statistical techniques for managing these varied distributions; however, if you select the wrong ones you will, at best, waste a lot of time and energy or, at worst, make very poor inferences from the techniques used. The methods of inferential statistics require that you have a clear grasp of the parameters to estimate and that you test your statistical hypotheses.
Suffice it to say that the level of expertise and skill to perform inferential statistics is greater than is needed to perform descriptive statistics.
OK, now let’s recap. Consider for a moment that some of the advanced functions in Excel may be considered a form of limited artificial intelligence. One can see how some vendors of Big Data using these classifications of statistics can make their claims. Vendors may package their version of descriptive and inferential statistics in a platform to be used in limited form to create so-called predictive models.
Now that you have a basic understanding of the limits and scope of these analytical tools, you can begin broadening your awareness of the appropriate approach for your projects and your firm’s data needs.
This is good news for risk professionals!
By now it should be obvious that a great deal of judgment, skill, and expertise is needed to execute a large Big Data project with the capability to become truly predictive. The predictive capability of Big Data may yet be achieved in the future as these vendors and early adopters gain more experience. Learning from the mistakes of others may also be a good strategy if you don’t have the budget or resources to tackle these projects now.
Automating the analytics of a Big Data project may include choosing the type and sophistication of artificial intelligence you need to employ. The skill set of Big Data and artificial intelligence vendors is still emerging. Selecting a vendor or set of vendors begins with educating yourself about the projects these firms have completed, understanding how closely their skills and experience match your problem, and developing a clear roadmap.
Artificial intelligence and Big Data are simply a new set of tools to deepen your knowledge of the business and your risk management program. Currently, these projects are focused on ROI initiatives by uncovering new opportunities to sell more products and services or to find ways to cut costs. Risk managers should look for opportunities to incorporate “smart” uses of these tools to reduce risk and improve operational capabilities whenever possible.
Technology will change how risk is managed! We sit at the crossroads of the early stage of this transformation. AI and Big Data projects should not be considered simply tactical projects but strategic building blocks to protect, manage and leverage data as a critical resource with potentially untapped benefits for the organization. Risk professionals should have a role at the table in shaping this vision to ensure that the full potential of these initiatives is realized.
Even if you are not a futbol fan, or soccer fan as we know it in the U.S., you no doubt paid attention to the US team’s progress in the World Cup in Brazil. The excitement of play and the exacting analysis of TV commentators are interesting to watch but hard to follow, in part because of the complex scoring system used in the FIFA World Cup standings.
In an attempt to better understand how the World Cup scoring system worked I went right to the source, FIFA.com.
Here is what I found: First of all, let me say that the scoring system and World Rankings of teams who compete in the FIFA World Cup is stunningly complex. Here is the formula used to calculate points for the FIFA World Ranking:
P = M x I x T x C x 100, where:
M: Points for the match result (Win – 3 pts.; Draw – 1 pt.; Loss – 0 pts.)
I: Importance of the match (Friendly – 1.0; World Cup qualifier – 2.5; Continental final or FIFA Confederations Cup – 3.0; World Cup final – 4.0)
T: Strength of the opposition [(200 – ranking position of opposition) / 100]. The formula applies only to the top 149 teams; all other teams receive the minimum weighting of 0.50.
C: Strength of the confederation [there are six separate confederations, each given a weight between 0.85 and 1.00 after each FIFA World Cup event]
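The formula above can be sketched in a few lines of code. The inputs below are illustrative, not an actual FIFA calculation, and the function simply transcribes the factors as described.

```python
# Sketch of the ranking formula above: P = M x I x T x C x 100.
# Inputs are hypothetical; this is not an official FIFA calculation.

def match_points(result_pts, importance, opp_rank, confed_weight):
    m = result_pts                       # 3 win, 1 draw, 0 loss
    i = importance                       # 1.0 friendly ... 4.0 World Cup final
    # T: strength of opposition, floored at 0.50 for teams ranked 150+
    t = max((200 - opp_rank) / 100, 0.50)
    c = confed_weight                    # confederation weight, 0.85 - 1.00
    return m * i * t * c * 100

# Hypothetical: a World Cup final win (I = 4.0) over the 5th-ranked team,
# in a confederation weighted 1.00.
print(match_points(3, 4.0, 5, 1.00))  # 3 x 4.0 x 1.95 x 1.00 x 100 = 2340.0
```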
Based on the complexity of the scoring system one would assume that the brackets in the World Cup would be determined by which teams ranked highest. One would be wrong! The ranking system appears to simply determine the 32 qualifying teams who will compete in the World Cup.
A Final Draw is then conducted to sort the 32 teams into eight groups of four, which must be rebalanced after the draw to ensure the correct number of teams in each group of play. Once the competition begins, an even more confusing system is used to determine who advances in the World Cup.
Here is how it works: The two teams with the most points in each group make it to the Round of 16. If teams are level on points, the first tiebreaker is goal differential. The next tiebreaker is goals scored. If that number is the same, then the result of the head-to-head match is determinative. If the head-to-head game ended in a draw, then finally, lots are drawn.
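The first three tiebreakers above amount to a simple sort. Here is a sketch with invented group results; the head-to-head comparison and the drawing of lots (the final, rarely used tiebreakers) are omitted for brevity.

```python
# Group-stage advancement as described above: rank by points, then goal
# differential, then goals scored. Head-to-head results and drawing of lots
# are omitted. The teams and results below are invented.

teams = [
    # (name, points, goal_differential, goals_scored)
    ("Team A", 4, 1, 4),
    ("Team B", 4, 1, 5),
    ("Team C", 6, 3, 7),
    ("Team D", 1, -5, 1),
]

standings = sorted(teams, key=lambda t: (t[1], t[2], t[3]), reverse=True)
advancing = [name for name, *_ in standings[:2]]
print(advancing)  # the top two advance to the Round of 16
```

Here Team B edges out Team A on goals scored after points and goal differential are level, which is exactly the cascade the commentators were tracking.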
Got it so far?
How could an archaic and complex system like this have anything to do with risk management? Well, if your risk assessment program resembles this scoring system you know you have a real problem.
It is no wonder that at least one of the groupings earned the moniker “The Group of Death.” This happens when one group ends up with an unusually heavy concentration of top competitors. The US team found itself in the Group of Death and, defying the odds, escaped it.
So what are the lessons for risk managers? First of all, complex or elaborate risk scoring systems do not result in better outcomes. If you can’t easily explain how you assess risks to senior management you may have created a “FIFA”. Complexity does not ensure accuracy and in many cases may hide the weaknesses inherent in your risk assessment program.
Next, complex risk systems may unintentionally predetermine outcomes because of a bias the designers used in determining what should rise to the top. I am not suggesting that FIFA has rigged the outcome of World Cup events; others will judge the fairness of the system for themselves.
What I am saying is that over-engineering a process tends to build the designers’ inherent biases into the ultimate outcome(s), whether they are aware of it or not. When designing a process to assess how results develop over time, the program should err toward capturing randomness rather than assuming outcomes based on past experience.
Don’t create your own version of a “Group of Death” simply because you know these risks exist. FIFA-proof your risk program to gain credibility with senior management and ensure that you haven’t predetermined the risk outcomes in your program.
Futbol may never be as popular as American football or baseball in the US but you have to admit that some of the matches were exciting to watch, especially the drama of the US team or your other favorites in the World Cup! Gooooooooaaaaaaaallllllllll!
James Bone is a Behavioral Risk Consultant with more than 20 years of experience in senior risk management roles across a variety of complex industries. Follow James at TheGRCBlueBook.com
PRLog (Press Release) – Jul. 3, 2014 – LINCOLN, R.I. — TheGRCBlueBook, the internet’s largest online directory of GRC vendors for highly regulated industries, provides a “No-Hype” solution for the purchase of Risk, Audit, Compliance and IT related software platforms. “Purchasing an effective GRC platform has now become a business imperative for any risk professional working in a highly regulated business environment” states James Bone, Executive Director of TheGRCBlueBook.
However, buyer’s remorse from purchasing the wrong GRC software can be very costly and result in putting a risk program behind the curve of increasingly complex regulations. TheGRCBlueBook has created a solution designed to address the biggest risk of purchase, “buyer’s remorse”.
TheGRCBlueBook allows its members to post reviews and rate GRC vendor applications right on the website. Members are allowed to post reviews so that others benefit from their experience. Learning from others is the easiest way to reduce “buyer’s remorse” and helps members understand why a product worked in one industry but not so well in another.
Customer reviews have been used effectively by consumers to purchase the services of home repair providers, electronic products, cars, travel & entertainment and other services. TheGRCBlueBook believes that its informed and professional member network can provide high quality, trustworthy reviews absent the hype and marketing from pundits who are paid to promote a GRC solution regardless of how well it may work in your industry.
TheGRCBlueBook was designed by a risk professional for risk professionals and now boasts membership from some of the top firms in risk, audit, compliance, and IT from 33 countries around the world! Membership in TheGRCBlueBook is free so join today!
Users are allowed to post anonymous or fully disclosed reviews, and none will be posted until each review has been quality checked and verified for completeness, accuracy, and relevance. TheGRCBlueBook’s vast database has opened up the global marketplace for GRC vendor solutions in a one-stop portal so that members can find the tools they need to solve their specific risk problems.
For more information: email firstname.lastname@example.org or visit https://thegrcbluebook.com or call 866-503-2931.