The auditor’s fallacy: The law of small numbers

Big data analytics in auditing

Humans have used simple statistical sampling for millennia to make sense of the world around us. In a resource-constrained world, sampling gave emperors, surveyors, and accountants a practical workaround to the prohibitively intensive process of counting, checking, and validating everything. Sampling is the selection of a subset (a statistical sample) of individuals from within a statistical population in order to estimate characteristics of the population as a whole.
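
To make this concrete, here is a minimal Python sketch of sampling in action. The “population” of invoice amounts below, its size, and the sample size are all invented purely for illustration; this is a sketch of the idea, not of any particular audit procedure.

    import random

    # Hypothetical population: one million invoice amounts (figures invented for illustration).
    random.seed(42)
    population = [random.gauss(250.0, 80.0) for _ in range(1_000_000)]

    # A full census touches every record to compute the true mean.
    true_mean = sum(population) / len(population)

    # A random sample estimates the same characteristic from a tiny subset.
    sample = random.sample(population, 1_000)
    estimated_mean = sum(sample) / len(sample)

    print(f"census mean:     {true_mean:.2f}")
    print(f"sample estimate: {estimated_mean:.2f} (from {len(sample):,} of {len(population):,} records)")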

The census is an ancient instrument for this kind of estimation: it is mentioned several times in the Bible, and the word itself derives from the Latin censere – “to estimate”. One of the world’s earliest preserved censuses was taken in China in 2 AD during the Han Dynasty, and similar counts were used in Ancient Egypt and Greece as a means of tallying or estimating population characteristics and demographics. Historically, the immense benefits of sampling’s simplicity outweighed any cost to accuracy. “Close enough” was good enough.

Fast forward to 2019 and we’re living in a tremendously different world of exploding data volumes and complexity. One domain where this is particularly problematic is audit and assurance, where achieving reasonable assurance is increasingly challenging.

For MindBridge Ai, the most obvious place to apply our advanced analytics and breakthroughs in machine learning is the audit world, helping everyone move toward a more holistic and comprehensive risk analysis that enables more informed decisions.

Simply put, MindBridge Ai Auditor can be thought of as an advanced transaction analysis platform and decision-making tool that amplifies our ability to make sense of the complex and data-saturated world around us. In our digital world, it’s now possible to pivot from relying on sampling to algorithmically analyzing everything in a population.

Why is this evolution a good idea?

Why audit sampling doesn’t work

In his seminal work “Thinking, Fast and Slow”, Daniel Kahneman deals with problems related to “the law of small numbers”: the mistaken belief that small samples closely resemble the populations they are drawn from, an assumption that quietly underlies much of prevailing statistical sampling practice.

People have erroneous intuitions about the laws of chance. In particular, they regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics. The consequences of this belief for the audit and assurance business can be seen in countless high-profile audit failures. Mounting issues with outdated standards, transparency, and independence have prompted regulators to go as far as tabling legislation to break up the dominant Big Four firms.

Kahneman makes a point that we’ve known for a long time: the results of large samples deserve more trust than those of small samples. Even people with limited statistical knowledge are intuitively familiar with this law of large numbers, but due to human bias, judgmental heuristics, and various cognitive filters, we jump to problematic conclusions and interpretations:

  • Humans are not good intuitive statisticians. For an audit professional, sampling variation is not a curiosity; it’s a nuisance and a costly obstacle that turns every audit engagement into a risky gamble.
  • There’s a strong natural bias towards believing that small samples closely resemble the population from which they are drawn (see the simulation sketched just after this list). As humans, we are prone to exaggerate the consistency and coherence of what we see, and the exaggerated faith of auditors in what can be learned from a few observations is closely related to the halo effect: the sense that we understand a problem, person, or situation when we actually know very little.
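
Here is that simulation: a small Python sketch in which the ledger, its size, and its 2% error rate are all invented assumptions, chosen only to show how much small samples can swing.

    import random

    random.seed(7)

    # Hypothetical ledger of 100,000 transactions, 2% of which contain an error
    # (every number here is invented purely for illustration).
    POPULATION_SIZE = 100_000
    TRUE_ERROR_RATE = 0.02
    ledger = [random.random() < TRUE_ERROR_RATE for _ in range(POPULATION_SIZE)]

    def estimated_error_rate(sample_size):
        """Estimate the ledger's error rate from one random sample."""
        sample = random.sample(ledger, sample_size)
        return sum(sample) / sample_size

    # Small samples swing wildly around the true 2%; larger ones settle down.
    for n in (25, 100, 1_000, 10_000):
        estimates = ["{:.1%}".format(estimated_error_rate(n)) for _ in range(5)]
        print(n, estimates)

On runs like this, a sample of 25 will often report an error rate of 0%, while only the larger samples reliably land close to the true 2%.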

This is relevant for auditors because our predisposition toward causal thinking exposes us to serious mistakes in evaluating the randomness of truly random events. Our instincts and associative cognitive machinery seek simple cause-and-effect relationships, and this widespread misunderstanding of randomness sometimes has significant consequences.

The difficulty we have with statistical regularities is that they call for a different approach. Instead of focusing on how an event came to be, the statistical view asks what could have happened instead. Nothing in particular caused it to be what it is; chance selected it from among its alternatives.

An example shared by Kahneman illustrates the ease with which people see patterns where none exist. During the intensive rocket bombing of London in World War II, it was generally believed that the bombing could not be random because a map of hits revealed conspicuous gaps; some suspected that German spies were located in the unharmed areas. Careful statistical analysis revealed that the distribution of hits was typical of a random process, and typical as well in evoking a strong impression that it was not random. “To the untrained eye,” Kahneman remarks, “randomness appears as regularity or tendency to cluster.”

The human psyche is rife with biases and errors in calculation that have meaningful consequences in our work and lives. Algorithmic and computational tools like MindBridge Ai Auditor stand to improve our ability to make better, less biased decisions.
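
To see how easily chance alone produces apparent clusters like the ones on that wartime map, here is a short Python simulation; the grid size and number of hits are arbitrary choices made only for illustration.

    import random
    from collections import Counter

    random.seed(3)

    # Scatter 400 "hits" uniformly at random across a 20 x 20 grid of districts.
    GRID = 20
    hits = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(400)]
    per_district = Counter(hits)

    # Every district is equally likely to be struck, yet chance alone leaves
    # many districts untouched and piles several hits onto a few others.
    untouched = GRID * GRID - len(per_district)
    hot_spots = per_district.most_common(3)

    print(f"districts never hit: {untouched}")
    print(f"most heavily hit districts: {hot_spots}")

On a typical run, roughly a third of the districts receive no hits at all while a handful take four or more, exactly the kind of “pattern” that invited the spy theories.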

Minimizing risk exposure

In the article Kahneman co-wrote with Amos Tversky, “Belief in the Law of Small Numbers,” the authors explain that intuitions about random sampling appear to satisfy the law of small numbers, which asserts that the law of large numbers applies to small numbers as well. The article also includes a strongly worded recommendation “that professionals regard their statistical intuitions with proper suspicion and replace impression formation by computation wherever possible”. As an example, the authors point out that professionals commonly choose samples so small that they expose themselves to a 50% risk of failing to confirm their true hypothesis. A coin toss.
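
That coin-toss risk can be reproduced with a quick Monte Carlo sketch. The effect size, group size, and significance level below are assumptions chosen only to echo the flavour of the claim; they are not figures from the article.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Assumed scenario (illustrative only): a real effect of half a standard
    # deviation, tested with 30 observations per group at the usual 5% level.
    EFFECT_SIZE = 0.5
    N_PER_GROUP = 30
    ALPHA = 0.05
    TRIALS = 20_000

    detections = 0
    for _ in range(TRIALS):
        control = rng.normal(0.0, 1.0, N_PER_GROUP)
        treated = rng.normal(EFFECT_SIZE, 1.0, N_PER_GROUP)
        result = stats.ttest_ind(control, treated)
        if result.pvalue < ALPHA:
            detections += 1

    power = detections / TRIALS
    print(f"chance of detecting the real effect: {power:.0%}")
    print(f"chance of missing it entirely:       {1 - power:.0%}")

Under these assumptions the detection rate hovers around 50%: even though the effect is real, the sample is too small to find it reliably.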

A plausible explanation is that decisions about sample size reflect prevalent intuitive misconceptions of the extent of sampling variation. Technologies such as machine learning and pattern recognition are removing this bias, to the enormous benefit of practitioners who are currently at the mercy of mere sampling luck to find what is important.

Thanks to recent advances in cognitive psychology, we can now see that the law of small numbers is part of two larger stories about the workings of the human mind:

  • Exaggerated faith in small numbers is only one example of a more general illusion – we pay more attention to the content of messages than to information about their reliability. As a result, we end up with a view of the world around us that is simpler and more coherent than the data justifies. Jumping to conclusions is a safer sport in the world of our imaginations than it is in reality.
  • Statistics produce many observations that appear to beg for a causal explanation but do not lend themselves to one. Many facts of the world are due to chance, including accidents of sampling, and causal explanations of chance events are inevitably wrong.

We are at an important crossroads where we must reconsider traditional approaches like audit sampling in the context of the incredible technology that is now available. For companies that are struggling to work through huge volumes of digital transactions, detect risk, and extract meaningful insights, MindBridge Ai Auditor is an elegant and powerful solution.