The Challenges with Sampling and Internal Auditing
We have all heard the story about the cook with a favorite family recipe that calls for cutting the ends off the dough, only to learn later that the recipe's originator simply had one pan, and it was too small for the recipe. A cute story for cooking, but not for business – especially when it comes to internal audit.
Internal Auditors have relied on sampling because it has been the only tool at hand: it has traditionally been too expensive and time-consuming to inspect every item. Now, however, with advanced technologies such as big data and machine learning available, Internal Auditors need to rethink their approach to auditing. When it comes to looking for irregularities (anomalies), sampling is like cutting the ends off the dough; we do it because that's the way it has always been done, not because it is the best method for the job.
W. Edwards Deming is often considered the father of sampling in the US, and he introduced statistical sampling techniques to business after World War II. The theory behind sampling assumes that we have a set of items or occurrences and that, by picking a subset of items to test (with the sample size based on the expected error rate in the population), we can extrapolate the findings from the sample to the larger population without needing to analyze the entire population.
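A minimal sketch of that extrapolation step (the function name and figures here are illustrative, not from any specific audit standard):

```python
def extrapolate(sample_errors: int, sample_size: int, population_size: int) -> float:
    """Project the error rate observed in a sample onto the whole population."""
    return sample_errors / sample_size * population_size

# 3 errors found in a 500-item sample project to 60 errors
# across a 10,000-item population.
print(extrapolate(3, 500, 10_000))  # 60.0
```

This works well for estimating an overall error *rate*, which is exactly what it was designed for. The trouble, as the next section shows, is that it tells you very little about whether any *specific* rare item was caught.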
Internal Auditors have a different focus than External Auditors: they are specifically looking to see whether internal controls are being followed and whether there are any irregularities in the data. “However, irregularities are by nature rare. The likelihood of finding a rare occurrence in a sample is roughly equal to the sample size. Therefore, if we only sample 1% of the transactions, we will only have 1% chance of finding the irregularity.” (Robin Grosset, CTO MindBridge)
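That intuition can be checked exactly. The probability that a simple random sample contains at least one of the irregular items follows the hypergeometric distribution, and when there is a single irregularity it reduces to the sampling fraction, just as the quote says. A small sketch (population and sample sizes are made up for illustration):

```python
from math import comb

def p_catch_at_least_one(population: int, irregular: int, sample: int) -> float:
    """Probability that a simple random sample of `sample` items contains
    at least one of `irregular` bad items in a population (hypergeometric)."""
    return 1 - comb(population - irregular, sample) / comb(population, sample)

# One irregular transaction in 10,000; a 1% sample (100 items)
# has only a 1% chance of including it.
print(p_catch_at_least_one(10_000, 1, 100))  # ~0.01
```

Even with ten irregularities in the population, the same 1% sample catches at least one of them less than 10% of the time.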
In addition, the Internal Auditor needs to worry about the interdependencies in the data. As an example, if we were looking to see whether a control was being bypassed by splitting a single PO into two smaller POs, we would need to find both split POs in order to determine there might be a problem. Traditional sampling is also ineffective in this scenario, because the split would only be observed if both POs were randomly selected as part of the sample. To find items like split POs and other types of irregularities, an auditor needs to look at each and every item, understand the inter-relationships between the items, and analyze those inter-relationships from multiple perspectives (e.g., a “splitting perspective” or a “reversal perspective”).
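To make the “splitting perspective” concrete, here is a minimal sketch of a full-population check. The approval limit, time window, and PO fields are assumptions for illustration only, not any vendor's actual rules:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical control parameters -- assumed for illustration.
APPROVAL_LIMIT = 10_000
WINDOW = timedelta(days=3)

def find_split_pos(purchase_orders):
    """Flag groups of POs to the same vendor, close together in time, where
    each PO is under the approval limit but the combined total exceeds it."""
    by_vendor = defaultdict(list)
    for po in purchase_orders:
        by_vendor[po["vendor"]].append(po)

    suspicious = []
    for vendor, pos in by_vendor.items():
        pos.sort(key=lambda p: p["date"])
        for i, first in enumerate(pos):
            # Collect every later PO falling inside the time window.
            group = [first] + [p for p in pos[i + 1:]
                               if p["date"] - first["date"] <= WINDOW]
            total = sum(p["amount"] for p in group)
            if (len(group) > 1 and total > APPROVAL_LIMIT
                    and all(p["amount"] <= APPROVAL_LIMIT for p in group)):
                suspicious.append((vendor, [p["id"] for p in group]))
    return suspicious

pos = [
    {"id": "PO-101", "vendor": "Acme", "date": date(2020, 3, 1), "amount": 6_000},
    {"id": "PO-102", "vendor": "Acme", "date": date(2020, 3, 2), "amount": 7_000},
    {"id": "PO-200", "vendor": "Bolt", "date": date(2020, 3, 1), "amount": 4_000},
]
print(find_split_pos(pos))  # [('Acme', ['PO-101', 'PO-102'])]
```

Note that neither Acme PO looks suspicious on its own; the pattern only emerges when every item is examined and related items are grouped together, which is precisely what a random sample cannot guarantee.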
The example below highlights a services business that books an order, amortizes the revenue over a twelve-month period, and recognizes 1/12 of the revenue in the current period. If an auditor or analyst examined the data from only a quantitative perspective (figure A), it would be very hard to find the error. However, if the data were examined from the perspective of the flow of information between accounts (figure B), the error would be much easier to see.
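A toy version of that flow-based check can be sketched as follows. The journal entries, account names, and the reversed-flow error are all hypothetical, invented to mirror the figure's point:

```python
# Hypothetical 12-month amortization: each period should move 1/12 of the
# order value from Deferred Revenue to Revenue.
ORDER_VALUE = 12_000
MONTHS = 12
EXPECTED_FLOW = ("Deferred Revenue", "Revenue")
expected_amount = ORDER_VALUE / MONTHS

entries = [
    {"period": 1, "from": "Deferred Revenue", "to": "Revenue", "amount": 1_000},
    {"period": 2, "from": "Revenue", "to": "Deferred Revenue", "amount": 1_000},  # flow reversed
    {"period": 3, "from": "Deferred Revenue", "to": "Revenue", "amount": 1_000},
]

def flow_anomalies(entries):
    """Flag entries whose account-to-account flow or amount deviates
    from the expected amortization pattern."""
    return [e["period"] for e in entries
            if (e["from"], e["to"]) != EXPECTED_FLOW
            or e["amount"] != expected_amount]

print(flow_anomalies(entries))  # [2]
```

Quantitatively, every entry is an unremarkable 1,000; only the direction of flow between accounts exposes the period-2 error, which is the figure's point.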
To date, we in internal auditing have relied on sampling because it was the only tool we had that was financially viable. But sampling is ineffective for finding irregularities. Furthermore, we have implemented an arbitrary cutoff on what we examine, materiality, not because we aren't interested in finding all anomalies but because, again, examining all the data would be too costly and time consuming. We could try to alter how we sample to account for these limitations – taking very large samples and adding data that represents the inter-relationships and multiple perspectives. Or we could take advantage of modern tools that leverage big data and machine learning, and let software do the work for us.
Scott Galin is the Director of Sales at MindBridge. MindBridge helps organizations enhance their professional judgement using AI and offers a cloud-based solution that reduces the risk in auditing. For more information, please contact Scott at firstname.lastname@example.org