Our year in stories: 2018

We’re grateful to be a part of the world’s journey towards AI. Far more than an academic abstraction, AI and machine learning took a practical leap forward in 2018 across many different applications, with our own vision for the transformation of audit and financial analysis gaining momentum across the globe.

Here are some of the best and brightest spots of our year together in AI.

“AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.” – Montreal Declaration for a Responsible Development of AI

The social and ethical challenges of AI are just beginning to be realized, and the recent signing of the Montreal Declaration for a Responsible Development of AI is a big step forward in providing the framework for responsible technology development. As the first private sector signatory to the Declaration, we reinforced our commitment to responsible, human-centric AI systems.

Through her passion for enabling technology, Samantha Bowling, CPA, CGMA, was named a 2018 Innovative Practitioner by CPA.com. As the first to successfully use AI in auditing for small businesses, non-profits, and local government, Samantha’s firm, Garbelman Winslow, leads the pack in improving processes and reducing the risk of material misstatements.

“We need to figure how to free up more data so that AI can thrive.” – Leon Katsnelson, Director & CTO, Strategic Partnerships and Data Science Ecosystem, IBM, speaking at IMPACT AI

The inaugural IMPACT AI conference was held on May 24th, bringing industry thought leaders and technology experts to an audience of over 550 people. In addition to promoting AI education, the event aimed to elevate women in technology and increase their representation in the field. Watch Navdeep Bains, Canadian Minister of Innovation, Science and Economic Development, discuss the influence of AI and stay tuned for details on next year’s conference.

Industry reform was a big theme in accounting this year, with scandals for the Big Four and the UK Competition and Markets Authority recommending major shake-ups. Our CEO, Eli Fathi, reminded us how technology can play a critical role in reform.

The first known case of AI helping to investigate a human CPA committing over $2.8M in embezzlement fraud was documented on the ACFE Insights blog.

“AI is transforming the way auditors do business and the exponential pace of change is requiring CPAs to get up to speed quickly.” – Tom Hood, CPA, President & CEO, Maryland Association of CPAs

With dozens of events, webinars, seminars, and forums under our belts in 2018, two notable ones were our AI & the Future of Accounting roadshow, in partnership with the Canadian Trade Commissioner Service, and our expert CPA panel in December. While the roadshow introduced AI to audiences across eight cities, the expert panel delivered practical advice and recommendations tailored directly for auditors. We were also recognized by industry associations and media this year, including being selected as the Top New Product of 2018 by Accounting Today and the Best Machine Learning Solution for Regulatory Compliance by Central Banking.

After a successful pilot with universities across North America, we launched our University Alliance Program in July to educate and train accounting students on the use of AI in auditing. As this year ends, the momentum will continue into 2019 with more than double the number of institutions on board, over 1,300 students completing the program, and a wealth of new curriculum materials and case studies being generated.

Our partnerships with accounting firms around the world exploded, growing our user base to well over 200 organizations. Relationships with firms such as Garbelman Winslow, KNAV P.A., and Kreston Reeves solidify the value that AI brings to auditing and help us continually improve the MindBridge platform.

For our development team, 2018 was a year of transition as we went from launching the first release of MindBridge Ai Auditor to continuous delivery of major new features for users. In February, new functionality such as Natural Language Processing (NLP) and accounts payable launched at a marquee event held in partnership with the Canadian Trade Commissioner Service at Canada House in London, UK. The rest of the year saw delivery of discrete pieces of value for users, such as interim audit reviews, the data ingestion wizard, and the amazing Filter Builder, which auditors use to create their own AI-enabled tests and logic.

What will 2019 bring? We firmly believe that the AI revolution is still in its early stages for many, bringing aboard new players all the time, while others already work with AI-based audits every day. We’ll continue to share and educate along the way, and we hope you’ll let us know how we’re doing.

Why we signed the Montreal Declaration for a Responsible Development of Artificial Intelligence

With artificial intelligence (AI) influencing every aspect of our lives, and its continued growth in research and commercialization opportunities, the question isn’t whether we should develop it responsibly, but how.

Last year, over 400 participants came together at the Forum on the Socially Responsible Development of Artificial Intelligence to discuss themes of cybersecurity, legal liability, moral psychology, jobs, and other areas, beginning the conversation around the impact of artificial intelligence systems (AIS) on humans. Given that it’s now possible to create autonomous systems capable of performing tasks that were once the sole domain of human intelligence, and that have a strong influence on data-driven decisions, it’s imperative to consider the potential effects of AI on ethical and social concerns. How will AI impact security and privacy? What is the impact on social equality and cultural diversity? Will AI disrupt careers and upend the job market?

These are tough questions and the result of the 2017 forum was a draft declaration setting out a framework of ethical guidelines for the development of AI. After a months-long consultation process with the public, experts, and government decision makers, the final Montreal Declaration for a Responsible Development of Artificial Intelligence was signed on December 4, 2018 at the Society for Arts and Technology.

As of today, we are the first private sector signatory to the Declaration, reinforcing our commitment towards an ethical framework for AIS technology development. The Declaration has three main objectives:

  1. Develop an ethical framework for the development and deployment of AI
  2. Guide the digital transition so everyone benefits from this technological revolution
  3. Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development

How the Montreal Declaration applies to us

As MindBridge is building an AI platform to help people analyze and understand vast amounts of their data in ways never thought of before, it’s critical to follow a development philosophy that keeps our users at the center of the loop. Because we’re building it for you.

We firmly believe that AI is not meant to replace humans, rather its greatest benefit is to empower people to make better decisions for themselves and society without imposing constraints based on any specific beliefs. As the Declaration’s “Respect for autonomy” principle guides:

AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.

Another principle is democratic participation, where “AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.” Our human-centric approach to the MindBridge platform embodies this philosophy within every aspect of the system. Our CTO, Robin Grosset, explains the details and provides concrete examples in his recent blog.

We embraced these and other principles long before the Declaration was signed, so it required little thought to join and become the first private company to get on board. Now that it’s official, we look forward to working with industry, government, and other parties to ensure a responsibly developed AI future for all of us.

Our approach to human-centric artificial intelligence

Where is the AI?

Artificial intelligence (AI) is all around us: it powers the helpful voice on my phone and it’s in the digital assistant on my kitchen counter. Actually, I have to admit I like saying “Alexa, turn on Christmas” to turn my Christmas lights on and off. The device itself is just a simple end-point computer, like a terminal, communicating with a cloud-based service that does all the hard work of interpreting what I say and figuring out what to do.

Many AI systems are not as obvious as Alexa; they surround us, yet we don’t see them. Take the ads on my Facebook feed, for example: an algorithm is figuring out what it knows about me and then which ads will likely work best. Even with Google, what appears to be just a search box is much smarter. If you ask the question “What is the population of Canada,” Google is not just searching documents using its famous PageRank algorithm; it’s doing much more. It’s figuring out that an infographic is the best way to communicate the population of Canada to me and showing this alongside its other insights. It also knows flight numbers and does different things depending on context.

What we think is a simple search is much more. AI is sometimes quite subtle and helping us in ways we may not realize.

Good experience design often makes our little AI helpers invisible to us. Two of the ten Dieter Rams principles of good design are “Good design is unobtrusive” and “Good design is as little design as possible.” We can see why subtle or invisible AI happens; it is considered good design.

 

Does MindBridge hide its AI?

We have a philosophy that when our AI provides insight or direction to users, we give them the feedback they need to both see it and understand it. We believe in human-centric AI, which means the human is the central part of the system and they should be able to understand what the AI is telling them and have explanations at each stage. The AI needs to communicate and therefore, being visible is an essential element in the trust relationship we are endeavoring to create.

Having said that, sometimes we can’t help ourselves: occasionally we make the experience seamless and require users to click on little information tabs to find out more. This is a design principle called ‘progressive disclosure,’ which allows users to select the level of detail they want.

So where is our AI? How do you know it’s there and working? Let’s take three examples from our AI Auditor product and walk through the techniques and the design considerations.

 

#1 Unobtrusive but verifiable

Auditors often have to classify items in audit tools manually. They may need to say what kind of money is held in a certain type of account, whether it’s a cash asset, a liability, or maybe a non-capital expense. This process of instructing a software tool in what something means is laborious and repetitive. I think it’s fair to say nobody wants to do it but it’s required to get an accurate view of the finances. This is a great candidate for automation with AI.

MindBridge has a built-in account classifier that uses the human-readable label on financial accounts to determine what kind of account it actually is. This is a form of language processing, and we use two methods: the first is a simple search, which works well for well-labelled accounts; the second is a Neural Network Classifier, which learns how people classify accounts. The net effect (excuse the pun ☺) is that most users of MindBridge spend little to no time telling our system what an account is. It just knows. We do recommend, however, that users review its findings to confirm or correct them. Our AI also learns from these interactions.

This is what it looks like as it’s working: it appears to be loading data, pretty unobtrusive and just doing its thing. This is what it looks like when the user verifies the outcome: the user has the option to change the classification of the account, which is the only real clue that something smart has just happened. You could be forgiven for not noticing that a lot of work is happening, but there are some real time savings here.

Below are some charts comparing simple text search methods against a hybrid of text search and AI together. On simple and well-labelled accounting structures, the accuracy of a text search is indistinguishable from that of the AI. But as the data gets a little more complex, we see big wins. And as the complexity grows to that of a massive organization’s accounts, the simple text search accuracy breaks down and doesn’t cope at all, while the AI method keeps on punching through the problem and gets it done. The time savings at the complex level are huge; we are talking hours, if not days, of human time saved in laborious activities.
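To make the hybrid approach concrete, here is a minimal, hypothetical sketch in Python of a classifier that tries a keyword match first and falls back to a small neural network trained on past human classifications. The keyword rules, training labels, and use of scikit-learn are illustrative assumptions, not the MindBridge implementation.

```python
# Hypothetical sketch of a hybrid account classifier: try a simple keyword
# match first, then fall back to a small neural-network text model trained
# on labels that humans have already classified. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Keyword rules that cover well-labelled accounts (made-up examples)
KEYWORD_RULES = {
    "cash": "Cash and cash equivalents",
    "accounts payable": "Liabilities",
    "office supplies": "Non-capital expense",
}

# Past human classifications used as training data (made-up examples)
train_labels = ["Petty cash on hand", "Trade payables - suppliers", "Stationery & printing"]
train_classes = ["Cash and cash equivalents", "Liabilities", "Non-capital expense"]

# Character n-grams cope well with messy, abbreviated account names
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(train_labels, train_classes)

def classify_account(label: str) -> str:
    """Suggest an account class; users should still review and can override."""
    lowered = label.lower()
    for keyword, account_class in KEYWORD_RULES.items():
        if keyword in lowered:           # fast path: plain text search
            return account_class
    return model.predict([label])[0]     # fallback: learned classifier

print(classify_account("Cash - operating account"))   # resolved by keyword match
print(classify_account("Misc stationery purchases"))  # resolved by the neural network
```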

#2 Search that tells you what it understands and gives you options

The MindBridge search interface is a little different from what you’re used to, as we want everything to be understandable and explicable, even at the level of a search box. Have you ever typed a search into Google and not gotten the results you wanted? Chances are you didn’t scroll to page 2; you typed a slightly different question and got what you wanted by trial and error.

At MindBridge, we value the AI being visible and explaining itself so that our users can figure out what part of the question is driving the view of the data. Here we see a search user interface where the user types their query; there is no AI yet. The user hits go! The AI system parses the language and uses natural language processing (NLP) techniques to unpack what is being requested. Our NLP AI understands language in general as well as common accounting terminology. It highlights the important terms in the query and filters the transaction list accordingly. Note that the highlights are clickable, so a user can explore other possible interpretations and verify that the AI has understood the question. It also understands complex semantics such as conjunctions (logical connectives like AND, OR, and NOT), allowing more complex questions to be posed and answered.

In this way, MindBridge users can not only search vast amounts of transaction data for specific scenarios, they can do this without writing an SQL query or using similar technical languages. The AI is effectively reading back their query to them to help in the understanding of what’s driving the results and showing other possibilities. This user interface is very artful as it provides both progressive disclosure and explainable AI, all in a search box.
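As a rough illustration of the idea (not MindBridge’s patented method), here is a hypothetical sketch of a query parser that extracts the terms it recognized, echoes them back so the user can verify the interpretation, and applies them as filters over a transaction list. The regular expressions, field names, and sample data are all assumptions made for the example.

```python
# Hypothetical sketch: parse a natural-language query into recognized terms,
# show those terms back to the user, and filter transactions accordingly.
import re
from datetime import date

TRANSACTIONS = [
    {"account": "Travel expense", "amount": 12500.0, "date": date(2018, 12, 30)},
    {"account": "Office rent",    "amount": 4000.0,  "date": date(2018, 6, 1)},
]

def parse_query(query: str) -> dict:
    """Extract the terms the parser understood so the UI can highlight them."""
    recognized = {}
    amount = re.search(r"over \$?([\d,]+)", query, re.I)
    if amount:
        recognized["min_amount"] = float(amount.group(1).replace(",", ""))
    if re.search(r"year[- ]end|december", query, re.I):
        recognized["month"] = 12
    account = re.search(r"travel|rent|payroll", query, re.I)
    if account:
        recognized["account_term"] = account.group(0).lower()
    return recognized

def run_query(query: str):
    terms = parse_query(query)
    print("Recognized terms:", terms)  # the UI would highlight these in the query
    # All recognized terms are combined with AND; OR/NOT handling is omitted here
    return [
        t for t in TRANSACTIONS
        if ("min_amount" not in terms or t["amount"] > terms["min_amount"])
        and ("month" not in terms or t["date"].month == terms["month"])
        and ("account_term" not in terms or terms["account_term"] in t["account"].lower())
    ]

print(run_query("travel transactions over $10,000 at year-end"))
```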

For transparency, MindBridge has filed a patent for methods used in this search interface. We believe in ‘AI for Good’ and human-centric AI and we use patent protection to ensure the freedom to do the work we do.

 

#3 Ensemble AI

Ensemble AI is the main event at MindBridge and it guides much of our work. We consider its primary role to be a focusing function for people and, as we specialize in finding insights and irregularities in financial data, it allows us to do this in a robust and explainable way.

So how does Ensemble AI work?

First, we need to understand that the ensemble is not just one method or algorithm but many. It’s like having a panel of experts with different types of knowledge and asking each of them what they think about a given transaction or element of data. The system then combines all the insights from the individual algorithms together.

For example, AI Auditor includes standard audit checks, so some of these “experts” follow simple audit rules while others follow advanced AI techniques and algorithms. The point of the ensemble model is that they all work together like an orchestra. The user is the conductor: they can select what’s important to them, and the combined results from the ensemble are presented in an easy-to-follow way.
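To show what “combining the experts” can mean in practice, here is a minimal, hypothetical sketch of an ensemble score: a few independent checks each score a transaction, and user-chosen weights decide how they are blended while the per-expert breakdown stays visible. The rules, weights, and field names are assumptions for illustration, not the actual AI Auditor controls.

```python
# Hypothetical sketch of an ensemble risk score: several independent "experts"
# (simple rules plus a model score) each rate a transaction, and user-supplied
# weights decide how the results are combined. Illustrative only.

def rule_weekend_posting(txn) -> float:
    """Simple audit rule: postings made on a weekend are slightly unusual."""
    return 1.0 if txn["weekday"] >= 5 else 0.0

def rule_round_amount(txn) -> float:
    """Simple audit rule: suspiciously round dollar amounts."""
    return 1.0 if txn["amount"] % 1000 == 0 else 0.0

def model_outlier_score(txn) -> float:
    """Placeholder for a statistical or machine-learning outlier score in [0, 1]."""
    return txn.get("outlier_score", 0.0)

EXPERTS = {
    "weekend_posting": rule_weekend_posting,
    "round_amount": rule_round_amount,
    "outlier_model": model_outlier_score,
}

def ensemble_score(txn, weights) -> dict:
    """Return each expert's score plus the weighted combination, so the
    breakdown can be shown to the user rather than just a single number."""
    scores = {name: expert(txn) for name, expert in EXPERTS.items()}
    total_weight = sum(weights.values()) or 1.0
    combined = sum(weights[name] * scores[name] for name in scores) / total_weight
    return {"scores": scores, "combined": combined}

txn = {"amount": 5000.0, "weekday": 6, "outlier_score": 0.8}
weights = {"weekend_posting": 1.0, "round_amount": 1.0, "outlier_model": 2.0}
print(ensemble_score(txn, weights))
```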

Here’s an example of one of the detailed views of the ensemble at work. The little rectangles with the larger red or green highlights are the individual AI capabilities in the ensemble. Let’s dig deeper into two of these capabilities.

 

Expert score

One example of an AI method we use is an ‘Expert System.’ This is a classical AI method that draws on the knowledge of real-world accounting practice to identify unusual transactions.

How do we capture real-world knowledge? We work closely with audit professionals and quiz them with surveys and specific questions about risky transactions, allowing us to construct an expert system that knows hundreds of account interactions and their associated concerns. We can run this method very quickly on large amounts of data, allowing us to scale human knowledge and highlight issues that a human user looking at a small sample could easily miss.
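A simple way to picture this kind of expert system is as a table of risky debit/credit account interactions checked against every journal entry. The sketch below is hypothetical; the account pairs, concern descriptions, and field names are made up for illustration and are not MindBridge’s actual knowledge base.

```python
# Hypothetical sketch of an expert-system check: practitioner knowledge is
# encoded as risky (debit account, credit account) pairs, then applied to
# each journal entry at scale. Illustrative only.

# Knowledge base distilled from practitioner surveys (made-up entries)
RISKY_INTERACTIONS = {
    ("Revenue", "Accounts receivable"): "Unusual reversal of previously recognized revenue",
    ("Cash", "Suspense"): "Cash received from a suspense account",
    ("Payroll expense", "Cash"): "Payroll paid in cash outside the normal payroll run",
}

def expert_score(entry) -> dict:
    """Flag a journal entry if its debit/credit pair matches a known concern."""
    key = (entry["debit_account"], entry["credit_account"])
    concern = RISKY_INTERACTIONS.get(key)
    return {"flagged": concern is not None, "reason": concern}

entries = [
    {"debit_account": "Cash", "credit_account": "Suspense", "amount": 25000.0},
    {"debit_account": "Rent expense", "credit_account": "Cash", "amount": 4000.0},
]
for entry in entries:
    print(entry["debit_account"], "->", entry["credit_account"], expert_score(entry))
```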

Rare flows

Ensemble AI can also identify unusual things using empirical methods, leveraging the science of what is usual or unusual. One such method we use is called ‘Rare flows.’ This part of Ensemble AI is a method of unsupervised learning from a family of algorithms known as outlier detection. The nice thing about unsupervised learning algorithms is that they impose no preconceived rules or labels; they simply identify what’s in the data and let the data speak for itself.

The purpose of this method is to uncover unusual financial activity. We apply this method to all financial activity, but the specific PCAOB guidance on material misstatements says:

“The auditor also should look to the requirements in paragraphs .66–.67A of AU sec. 316, Consideration of Fraud in a Financial Statement Audit, for … significant unusual transactions.”

This algorithm finds unusual activity and highlights it, and we perform this type of analysis with several different ensemble techniques. One of the nice things about the ensemble is that you’re not relying on a single method: these techniques can look at account interactions, dollar value amounts, and other outlier metrics, bringing the results together.
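A minimal way to sketch the ‘Rare flows’ idea is frequency-based outlier detection on debit/credit account pairs: count how often each flow occurs in the ledger and flag entries whose flow is very infrequent. This is a simplified stand-in under assumed data structures, not the actual MindBridge algorithm.

```python
# Hypothetical sketch of a "rare flow" style check: count how often each
# debit/credit account pair occurs and flag entries whose flow is unusually
# infrequent relative to the whole ledger. Illustrative only.
from collections import Counter

def rare_flows(entries, rarity_threshold=0.01):
    """Return entries whose account-pair flow makes up less than
    `rarity_threshold` of all entries (a simple frequency-based outlier test)."""
    flow_counts = Counter((e["debit_account"], e["credit_account"]) for e in entries)
    total = len(entries)
    flagged = []
    for e in entries:
        frequency = flow_counts[(e["debit_account"], e["credit_account"])] / total
        if frequency < rarity_threshold:
            flagged.append({**e, "flow_frequency": frequency})
    return flagged

# A ledger dominated by one routine flow, plus a single unusual one
ledger = (
    [{"debit_account": "Inventory", "credit_account": "Accounts payable", "amount": 100.0}] * 500
    + [{"debit_account": "Revenue", "credit_account": "Travel expense", "amount": 9000.0}]
)
for entry in rare_flows(ledger):
    print("Rare flow:", entry["debit_account"], "->", entry["credit_account"],
          round(entry["flow_frequency"], 4))
```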

 

Why human-centric AI is needed in auditing

Most audit standards today, including the international standards, are the result of years of experience with previous cases of accounting irregularities. As such, they are great at identifying the problems of the past. The limitation is that the typical rules-based approach to finding irregularities can never identify a circumstance that was not anticipated, and this is why we should apply AI methods like those described above.

A future-looking audit practice needs to adapt to new circumstances. Every industry is changing as a result of AI adoption, and the ability to uncover new and unusual activity, and to explain why it is being flagged, is a key strength of the AI systems used by forward-looking audit professionals.

This is why we need AI in auditing. In the words of John Bednarek, Executive Director of Sales Operations, Marketing & Strategic Business Development at MindBridge, “Auditors using AI will replace auditors who don’t.” The simple reason is that auditors who leverage AI will be faster and more complete in their work, providing a better service to their clients.