
AI for enterprise risk management (webinar recap)


On February 23, 2022, MindBridge’s VP of Strategy and Industry Relations, Danielle Supkis Cheek, CPA, CFE, CVA, hosted a live webinar and Q&A on how to remove barriers to AI for your core ERM framework. Danielle shared some great insights during the webinar (video link below), and the attendees were highly engaged, participating in polls and asking great questions during the Q&A.

Thank you to everyone who attended the live event. If you missed it, you can view a recording of the webinar here, or keep reading for a recap of some of the most valuable takeaways.


Financial technology transformation is moving rapidly, making it hard for enterprises and their leadership to adapt their enterprise risk management (ERM) processes. If risks are not appropriately managed, the impact on informed judgment can be detrimental. AI addresses this challenge by helping financial professionals augment traditional risk management processes, identify anomalies more quickly and accurately, and surface insights to mitigate risk.

AI’s Place In The COSO ERM Framework

Nuggets of information are difficult to extract anytime you have extreme amounts of data: ledgers, sub-ledgers, or other operational datasets. And while most of us have some data analytics programs in-house, it is incredibly challenging to build out complex programs that detect outliers based on your own norms or control points.

That’s where AI can start fitting in.  

AI makes it possible to aggregate extreme amounts of data that would otherwise be highly cumbersome to gather and turn into decision-useful information. So instead of going through a theoretical exercise, you’re able to work with the actual concepts and actual risks that are permeating your data.

Current Pressures Create New Risks

The risk environment is constantly changing. With factors such as staffing shortages, new regulations, data volume issues, and budget pressures, organizations must be aware of how these pressures affect their risk profile.

When you have all those different kinds of changes in pressures, your risk profile also changes very rapidly, and in ways you may not be aware of. Sure, you probably have good guesses, and really good insights and intel coming in, but the speed at which things change is tremendous. For example, one of the most concerning pressures organizations face is the demand to do more with less. This burden pushes organizations to skip steps or bypass processes, which can ultimately lead to errors.

Detecting Behavior in Data  

Here at MindBridge, a lot of the work we’re doing related to risk stems from the question: what risks are created within organizations by the humans who are part of them? What’s essential within the data, and what you see through the data, is behavior: human behavior. Of course, there are external risks to consider; however, there are also things that may not be visible in existing data and can only be discovered by looking within your organization’s environment.

“When a measure becomes a target, it ceases to be a good measure.”

 – Charles Goodhart

In other words, any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.

KPIs create accountability for an organization: hit your metrics. The problem is that many organizations hyper-focus on a single metric, or on a series of metrics that can all be manipulated the same way.

Ensemble AI

Ensemble AI combines three types of techniques (machine learning, statistical methods, and traditional rules), weighs their results together, and presents them so that you (the human) can determine what doesn’t look ‘right.’ These flags are designed to give you plenty of clues as to what you should be paying attention to in your books: anything that could become, or already is, an issue.

This process allows the analysis to identify the relative risk of unusual patterns by combining a human expert’s understanding of business processes and financial flows with automated outlier detection.
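As a rough illustration of the ensemble idea, here is a minimal sketch combining two hypothetical detectors, a hand-written rule and a statistical z-score test, into one weighted risk score. The field names, rule, and weights are all assumptions for the example, not MindBridge’s actual implementation; in practice, a machine learning model would contribute a third score in the same way.

```python
from statistics import mean, stdev

def rule_score(txn):
    """Rule-based detector: round amounts posted on a weekend (hypothetical rule)."""
    return 1.0 if txn["amount"] % 1000 == 0 and txn["weekday"] >= 5 else 0.0

def stat_score(txn, amounts):
    """Statistical detector: capped z-score of the amount against the population."""
    z = abs(txn["amount"] - mean(amounts)) / stdev(amounts)
    return min(z / 3.0, 1.0)  # z >= 3 counts as fully unusual

def ensemble_score(txn, amounts, weights=(0.4, 0.6)):
    """Weighted blend of the individual detector scores into one relative risk score."""
    detectors = (rule_score(txn), stat_score(txn, amounts))
    return sum(w * s for w, s in zip(weights, detectors))

history = [1200, 980, 1105, 1030, 995, 1150, 60000]
suspect = {"amount": 60000, "weekday": 6}  # large, round, posted on a Saturday
print(round(ensemble_score(suspect, history), 2))
```

The point of the weighting step is that no single detector has to be decisive: a transaction that trips several weak signals can still surface near the top of the review list.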

“An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem.”

 – John Tukey

Outlier vs. Anomaly

Many people use the terms outlier and anomaly synonymously. Outliers are observations distant from the mean or center of a distribution; they don’t necessarily represent abnormal behavior or behavior generated by a different process. Anomalies, on the other hand, are data patterns generated by a different process.
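A small sketch makes the distinction concrete. A distance-based test (here, a simple z-score with illustrative data) catches outliers, but a value can pass that test and still be an anomaly if a different process generated it:

```python
from statistics import mean, stdev

def is_outlier(x, data, threshold=3.0):
    """Distance test: is x more than `threshold` standard deviations from the mean?"""
    return abs(x - mean(data)) / stdev(data) > threshold

# Amounts produced by the normal process cluster tightly around 100.
normal = [98, 101, 99, 102, 100, 97, 103, 100, 99, 101]

print(is_outlier(250, normal))  # True: distant from the center, but possibly legitimate
print(is_outlier(100, normal))  # False: yet a fabricated entry copied from a typical
                                # amount would also pass this test while still being an
                                # anomaly, because a different process generated it
```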

Control Points / Tests

MindBridge control points are designed to compare client data against pre-defined areas of risk, providing visualizations and reports to understand levels of risk (risk scores), identify unusual transactions, and drill down into the details. With Ensemble AI, these control points work together to provide results that couldn’t be achieved by running each capability separately. 

To give you an idea, MindBridge has one control point that looks at the pairing of transactions. And let’s say the pairing of accounts receivable and revenue is one of your ‘norms.’ If you look and see that you have a transaction that pairs cash to revenue, it would be flagged for review as it is not the standard pairing. Machine learning is needed in that iteration to determine “what is normal” in each uploaded file.
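To make the pairing idea concrete, here is a toy sketch of how ‘normal’ account pairings could be learned from historical transactions and new entries checked against them. The field names, frequency threshold, and data are hypothetical, not MindBridge’s actual implementation:

```python
from collections import Counter

def learn_pairings(transactions, min_freq=0.05):
    """Treat debit/credit pairings seen in at least `min_freq` of history as normal."""
    counts = Counter((t["debit"], t["credit"]) for t in transactions)
    total = sum(counts.values())
    return {pair for pair, c in counts.items() if c / total >= min_freq}

def flag_unusual(transactions, normal_pairings):
    """Return transactions whose pairing was never (or rarely) seen in history."""
    return [t for t in transactions if (t["debit"], t["credit"]) not in normal_pairings]

history = ([{"debit": "AR", "credit": "Revenue"}] * 40
           + [{"debit": "Cash", "credit": "AR"}] * 40)
norms = learn_pairings(history)

# Cash paired straight to revenue skips the receivable and breaks the learned norm.
print(flag_unusual([{"debit": "Cash", "credit": "Revenue"}], norms))
```

Because the norms are learned per file rather than hard-coded, the same check adapts to each organization’s own standard pairings.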

The same concept applies to vendor analysis. For example, say you pay $30,000 a month in rent to a particular landlord, and then all of a sudden you see a $60,000 amount. That transaction will get flagged as an “unusual” amount for you to review. 

This unusual amount may be justified as rent + deposit or a similar situation. However, if said outlier occurred in the last month of the fiscal year, you may have some other factors to consider. For instance, do you have a massive cut-off issue? 
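A simplified sketch of this kind of per-vendor amount check, flagging payments that deviate sharply from a vendor’s historical norm. The z-score threshold and the data are illustrative assumptions, not MindBridge’s actual control point:

```python
from statistics import mean, stdev

def unusual_vendor_amounts(payments, threshold=2.0):
    """Flag payments more than `threshold` standard deviations from the vendor's norm."""
    flagged = []
    for vendor, amounts in payments.items():
        mu, sigma = mean(amounts), stdev(amounts)
        for amount in amounts:
            if sigma and abs(amount - mu) / sigma > threshold:
                flagged.append((vendor, amount))
    return flagged

# Eleven months of rent at roughly $30,000, then a $60,000 payment.
payments = {"Landlord LLC": [30000, 30010, 29990, 30000, 30005, 29995,
                             30000, 30010, 29990, 30000, 30005, 60000]}
print(unusual_vendor_amounts(payments))  # [('Landlord LLC', 60000)]
```

A human reviewer still decides whether the flagged payment is rent plus a deposit, a cut-off error, or something worse; the test only surfaces it.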

All those different tests run simultaneously, in real time, on 100% of transactions. So instead of going through theoretical exercises of risk, you can look at the actual risks present in your data and use them to shape your judgment about which areas need your time.

Use Case

During the webinar, Danielle presented many strong use cases concerning the utilization of AI in the ERM framework. We don’t want to spoil the entire show for you, so we’ll just cover one of the use cases in this recap.

One of the use cases presented was the DOJ’s guidance on effective compliance programs.

This guidance describes the compliance program the DOJ expects you to have in place to address the risks of violating the Foreign Corrupt Practices Act (FCPA), which carries criminal penalties. 

Let’s face it: no one wants to go to jail, especially for something done without your knowledge. And because international business practices differ, something that is standard practice in a foreign country (e.g., bribes) may be illegal at home. 

Suppose a foreign third party pays bribes without your knowledge, and the DOJ sees that you have an effective compliance program. In that case, you may have an affirmative defense against criminal liability for the FCPA violation. You may still have some civil liability, but you don’t have people going off to jail. 

The DOJ uses three significant components when evaluating the effectiveness of your compliance program.

1. Is it a well-designed program?

Here they’re determining if there are procedures in place for risk assessment. For example, is there risk-based training, are there appropriate controls and processes for third-party management, and what is your due diligence process in mergers and acquisitions?

2. Is the program being applied earnestly and in good faith? In other words, is the program adequately resourced and empowered to function effectively?

The DOJ is very interested in what kind of resources you provide to this program and how much funding is allocated to it. Unfortunately, attaching funding as a factor poses a problem for some organizations, because not all money is spent efficiently. Many people have spent a lot of money building a program that ends up too narrow in scope when a more holistic approach is needed. 

3. Does the corporation’s compliance program work in practice?

For more holistic concepts of risk, the DOJ wants to see the internal audit, control, testing, and the iteration and constant evolution of the programs; but most importantly, does it work? This is very similar to enterprise risk management, where you’re constantly reassessing and fine-tuning and becoming more precise. This process can be challenging if you don’t have an inflow of real data that can be processed in real-time. 

Technology Ethics

In a new technology benchmarking report, the Association of Certified Fraud Examiners said, “The use of AI and Machine Learning in anti-fraud programs is expected to more than DOUBLE over the next two years.” This is scary, because some people don’t know how to supervise AI properly. There are plenty of tools out there that let you custom-configure your own AI. The problem is that you don’t know whether it is actually free from bias, or whether you are supervising it appropriately. 

IESBA Technology Ethics Project

“The use of technology is a specific circumstance that might create threats to compliance with the fundamental principles. Considerations that are relevant when identifying such threats when a professional accountant relies upon the output from technology include:
  • Whether information about how the technology functions is available to the accountant.
  • Whether the technology is appropriate for the purpose for which it is to be used.
  • Whether the accountant has the professional competence to understand, use and explain the output from the technology.
  • Whether the technology incorporates expertise or judgments of the accountant or the employing organization.
  • Whether the technology was designed or developed by the accountant or employing organization and therefore might create a self-interest or self-review threat.”

Source: https://www.ifac.org/system/files/publications/files/Proposed-Technology-related-Revisions-to-the-Code.pdf

The standards mentioned above are part of the new technology ethics project from the International Ethics Standards Board for Accountants (IESBA), the international ethics body for CPAs. You may think this standard applies only to public accountants or your auditors. In fact, it is the proposed standard for CPAs worldwide who are internal to an organization: your controllers, your CFOs, your internal audit team, and any other CPAs in your organization.

So, it is crucial to be cautious if your organization decides to take the “build your own” or “use a wizard” machine learning route, where some people may not know exactly how the program works. This lack of transparency creates risk for your organization and for the individuals within it who carry a CPA license.

Click here to view a complete recording of the webinar.