Ethical AI goes beyond legal AI


The recent case of the Statistics Canada project to use personal financial data from banks to study the spending habits of Canadians provides a very clear lesson in the ethics of AI. In this case, Statistics Canada has clear legal authority to request and use this data, and it’s very likely that the proposed project conforms to ethical standards for AI and analytics. There is also an excellent case that the project will provide significant public benefit. However, it’s also clear that the project failed to gain a moral license from Canadians and, by failing in this regard, the agency has put the project, and perhaps its freedom to operate, at risk.

Shining a light on the project

At this point, the details of the project are difficult to come by and I have not seen evidence of any public consultation or public notice. The project came to light through a news story published by Global News on Oct 26, 2018. Based on the news reports, and granting the generally good intentions of government bureaucracy, we can infer that Statistics Canada finds its current survey-based approach to collecting data on Canadian spending habits deeply inadequate. I also expect that the bureaucrats involved saw an opportunity to provide a more accurate picture of Canadian spending habits, more efficiently and with less burden on the Canadian public. After consulting with Justice, they determined that they had the legal authority to do so, and they honestly believed that Canadians by and large trust Statistics Canada with their personal data. So they made the decision to use the legislation governing Statistics Canada and request the data from the banks. I also expect the bureaucrats knew that this request could be misunderstood by the public, so they decided to act out of the public eye, trusting that the banks would comply without fuss. Of course, this project will benefit the banks greatly.

What could possibly go wrong?

Application to analytics and AI

I want to stress that there was no malice in the bureaucratic intentions behind this project. To the contrary, I see the motivations as things we want to encourage: innovation, efficiency, improved quality, and Canadian competitiveness. Where things may have gone wrong is a long-standing bureaucratic culture of secrecy. The causes of, and solutions to, this problem with bureaucratic culture are a topic for another day.

No doubt there will be calls for changes to the Statistics Act, but I think calls for wholesale change are misguided. Overall, the Act provides a good example of a legal framework for analytics. I’m not saying that events such as this should be ignored; rather, the Department of Justice should be tasked with reviewing the Act and its regulations with the goal of improving the legislation, perhaps by making public consultation mandatory when Statistics Canada wants to collect personal data indirectly.

Legislative frameworks for analytics and AI must do a few things well:

  • They must protect privacy
  • They must ensure that the collection and use of personal data contributes to the general social welfare broadly defined
  • They must protect the ability to innovate

On this last point, legislative frameworks must be flexible, protecting against egregious misuse while relying on social and market mechanisms to align activity with public expectations. Authority granted by legislation must also protect the right to innovate from being blocked by a radical few. By these tests, the Statistics Act stands up well.

Having legal authority to do something is not the same as acting morally or ethically. In general, ethical use of personal data requires that the data subjects explicitly consent to the collection and use of their data. With that consent, one can assume that the data subjects have granted the analytics organization a license to use their data for the intended purposes, but in practice this is complicated and there are exceptions to this approach. One such exception is when the use serves the public good. From what I understand of the proposed use of data by Statistics Canada, this test is clearly met.

How we can do better

So what went wrong? The personal data in the possession of the banks was created as part of delivering banking services. The public expectation, perhaps naive, is that this is the only use to which they have consented. The attempt by a third party to access and use this data to develop profiles of consumer spending habits goes well beyond those expectations. In the public’s eyes, the legal authority to do this is irrelevant, and even disturbing. At the very least, a public education campaign describing why this is important to Canada and Canadians, and how each individual will be protected in the process, would have gone a long way toward easing the public’s concerns.

More thorough consultation, and offering individuals the ability to opt out, would likely have eliminated these barriers and created a positive opinion of the project. Each time an organization tries to fly under the radar when accessing large quantities of personal data, it creates a risk of public backlash that will saddle the industry with stifling regulation.

The AI industry needs the right to ethically innovate and to do this, we need a regulatory environment that gives latitude to innovate. This requires the public to be confident that industry members will act ethically within the bounds of the legislation. Each time the AI industry goes against these expectations, the right to innovate is put at risk.

 

How accountancy can thrive in the age of AI


The world is changing at a faster pace than ever, leading the Bank of England’s chief economist, Andy Haldane, to state that the disruption caused by the ongoing fourth industrial revolution would be “on a much greater scale” than that experienced during the Victorian industrial revolution. Technology is evolving and infiltrating new industries every day, and the era of artificial intelligence (AI) is very much upon us. But do employees risk becoming “technologically unemployed” with this rise of technology? Or could accountancy instead thrive thanks to the rise of AI?

Change is in the air

The adoption of new regulations around mandatory audit firm rotation has stimulated competition in the market and created real momentum in the accountancy industry. The most progressive firms have identified AI capabilities as an important differentiator, but still appreciate that best practice is a collaborative approach, one that augments human intelligence with artificial intelligence.

In the same way that the human brain cannot compute hundreds of thousands of data points in a split second, a machine cannot always understand the context of real-world accounting. In combination, an accountant fueled by AI is turbo-charged to make faster, more accurate decisions, while having more time to focus on providing guidance, value, and insights.

Enhancing the practice

Although proactive firms are deploying AI to help drive efficiency, reduce risk, and increase quality in their compliance processes, there still remains caution in some parts of the market. Implementing AI to augment and support the practitioners in the accountancy world has shown how this technology can benefit the industry, so why is there still hesitancy? It’s a caution that’s driven by myth, misunderstanding, and misconception regarding the perceived black-box nature of artificial intelligence. Each is an unnecessary barrier to the progress all companies need to make if they’re to compete in the modern marketplace.

Often the adoption of AI tools remains hamstrung by the idea that they cannot integrate with existing technology and are complex to use, and this comes down to a misunderstanding of what’s available. The most effective solutions are affordable and designed to work easily alongside people; they demystify AI and are intuitive to use. Moreover, as regulators take an increasingly tough stance on audit failures, AI solutions are a long-term investment that can reduce risk, increase efficiencies, and improve the quality of financial analysis.

Collaboration, not isolation

In the age of AI, every company, including those in the financial industry, must become a technology company in order to defend and grow its market. It is no longer a question of whether the accountant’s role will change, but of how accountants can equip themselves with the necessary skills to thrive in a changing world. It’s time to forge ahead and recognize that accountancy actually benefits from the rise of artificial intelligence, unearthing more of the risk in financial data and providing greater assurance than ever before.

AI is not something for accountancy to fear; it’s something for the industry to embrace in order to enhance auditing practice, increasing accuracy and efficiency.


Answering questions about Ai Auditor


As practical applications of artificial intelligence (AI) are new to the finance space, especially with regard to audit, it’s no surprise that the same questions come up across our expert-led webinars. To help you understand how AI is applied to audit, we’ve collected the most common questions and answers here, as provided by our VP of Growth, John Colthart.

Q: What programming skills or training are needed to use Ai Auditor?

Our goal is to minimize training and make the platform easy to use – a different philosophy from some of the older audit tools you may have used in the past. We designed Ai Auditor to be as user friendly as possible to help you get to maximum value as quickly as possible, which means you need no programming or scripting skills to get things done. It’s all drag-and-drop: mapping your data, running the analysis, and viewing the results in as easy a manner as possible.

Of course, we do recommend and include training on using the platform itself. Typically, that’s a kickoff with our customer success team to show you around the platform and help you load in that first data set. We give you a few days to play around with the data and reports, then set up a more focused discussion to help you get the most out of the results, such as understanding what control points do and what the machine learning algorithms are hunting for.

Q: Will Ai Auditor replace our existing audit tools or is it in addition to what we use?

The honest answer is that it depends on what you want to accomplish. If you’re just using a working paper solution to gather data to do quick assessments of a trial balance, our platform would absolutely be an addition to what you’re already using. You would use it to go even deeper into the analysis of the data and bring all our reports back into your working papers to have a much higher level of confidence. On the other hand, if you’re using a data analytics tool, especially a visual tool that doesn’t have machine learning built into it, Ai Auditor could potentially be a more effective and easier to use replacement.

We never say it’s one way or the other because every firm we work with has a different view of how technology supports their people and engagements, and of how they look at things from a line-of-business perspective, from M&A all the way through to assurance, audit, and taxation.

At the end of the day, it really depends on the use case, but one thing is certain: Ai Auditor is a tool that helps people be more effective at understanding data and gathering evidence, in whatever capacity best suits their needs.

Q: Where does all the data that’s being analyzed come from?

We provide a drag-and-drop interface to load your data and integrate with the most common ERP systems used today – things like CCH Engagement, QuickBooks, Thomson Reuters AdvanceFlow, NetSuite, Sage Intacct, and more – to pull the various types of data we need. For something like accounts payable, for example, we use information from the ledger itself, including the payables register at the end of the period, so we can see what’s outstanding and things such as the vendor name and the user hierarchy.

We also eliminate the need to spend time or IT resources on data extraction, manipulation, and ingestion – we take care of all the data heavy lifting so you can focus on the analysis and results.

Q: Does Ai Auditor help with audit planning?

This one is critical to understand: Our platform isn’t just for performing year-end audits, rather it plays an important role throughout the year, including planning. Our interim analysis is always available, going back to whatever period is available from the data, to help you see and understand how the business is transitioning at various points in time.

We support planning in different ways, such as looking at the data to identify and prioritize where you should be spending more time. It could be potential risk in inventories or accounts payable, or really anything that could influence your thinking about how the business is performing. We also give you all those control points to show exactly what’s going on in the business, and we can help you derive insights from the available data.

We want you to see and drill down into where the risks are at any point in the year, all for the same price as doing a single engagement at the end of the year.

Q: How and where is your data stored?

MindBridge Ai cloud services are hosted on a secure cloud infrastructure, with our primary and backup providers fully ISO 27001 and SSAE 16 compliant. Our software stack is designed for defence in depth, deploying redundant controls in the infrastructure, network, platform, and application to ensure no single point of failure.

Q: How secure is the data?

Customer data is always protected using NIST-approved algorithms (AES-256) and the most secure protocols and implementations available. All network connections are encrypted, and all data stores, including primary and backup, are encrypted at all times.
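For readers who want a concrete sense of what AES-256 encryption looks like in code, here is a minimal, illustrative Python sketch using the open-source cryptography library’s AES-256-GCM primitive. It is only an example of the class of NIST-approved algorithm mentioned above, not MindBridge’s actual implementation, and the sample payload is hypothetical.

```python
# Illustrative sketch only: encrypting a data blob with AES-256-GCM.
# This is not MindBridge's code; it simply demonstrates the primitive.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)      # 256-bit key, i.e. AES-256
aesgcm = AESGCM(key)

plaintext = b"general ledger extract, FY2018"  # hypothetical payload
nonce = os.urandom(12)                         # unique 96-bit nonce per message
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption fails loudly if the ciphertext or nonce has been tampered with.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```

In a production system the keys would, of course, live in a managed key store rather than in application code.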

Q: How do you control who has access to what data?

MindBridge Ai has zero access to your client’s data. We maintain SOC 2 compliance and we build in very high security around who can see and perform operations on various types of data, with different levels of hierarchical security. Each Ai Auditor customer has their own dedicated database and storage and there’s no interaction between customers or mixing of data.

At the end of the day, securing your client’s data is paramount, and being able to control that internally – who gets access to what pieces – is equally important to us.

Q: How do we use the results we get from Ai Auditor and include them as part of our overall processes?

Every report is available in a downloadable format, whether it’s images from a screen or some form of data table. For example, our data can be exported to a Microsoft Excel file and attached as a supporting document to your audit report. In fact, we highly recommend taking all the data we provide and showing it to your client, where it won’t cross independence lines, so they see you as the expert, trusted partner you should be, with the evidence to back it up.

You can also produce reports to share with your end clients including income statements, financial trending analysis, financial analysis, and more.
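As a rough illustration of how an exported results file might be folded into the working papers, here is a short Python sketch using pandas. The file name and column names (risk_score in particular) are assumptions for the example, not the platform’s actual export schema.

```python
# Hypothetical example: summarizing an exported results file before
# attaching it to the working papers. File and column names are assumed.
import pandas as pd

results = pd.read_csv("exported_results.csv")        # file exported from the platform
high_risk = results[results["risk_score"] >= 0.8]    # keep only higher-risk entries

with pd.ExcelWriter("audit_support.xlsx") as writer:
    results.to_excel(writer, sheet_name="All transactions", index=False)
    high_risk.to_excel(writer, sheet_name="High risk", index=False)
```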

For more information on Ai Auditor or to book a demo, visit mindbridge.ai.

CPA Firm Taps MindBridge Ai’s Technology in Audit as a Competitive Advantage


An interview with Lisa Zimeskal, CPA, Partner, Hoffman & Brobst, PLLP

According to a survey from the International Federation of Accountants (IFAC), smaller accounting firms are facing significant challenges. Attracting new clients, keeping up with new regulations and standards, and cost pressure from competitors were among the top concerns of these firms.

To combat these challenges, Hoffman & Brobst, PLLP, a firm of five partners, decided to embrace artificial intelligence (AI) in its audit services as a differentiating advantage for its clients, and the firm now uses the extensible MindBridge Ai Auditor platform in its audit process.

Ai Auditor is an award-winning platform that empowers auditors to detect anomalies in financial data with speed, efficiency, and completeness. The platform leverages expert-taught machine learning and AI to ingest and analyze 100% of financial data, as opposed to traditional sampling techniques, to provide higher assurance along with cost savings. Armed with greater insights and boosted efficiency, auditors can focus on what matters most – providing higher value-added services and guidance to their clients.

John Colthart, VP of Growth at MindBridge Ai, recently spoke with Lisa Zimeskal, CPA, Partner, Hoffman & Brobst, PLLP about how AI tools can benefit small firms. Here’s what she had to say.

John Colthart: Tell us about Hoffman & Brobst, PLLP.

Lisa: Hoffman & Brobst, PLLP is a full-service accounting firm in Southwest Minnesota. We provide audit, tax preparation, compilation and review services, in addition to payroll processing and third-party retirement plan administration services.

John Colthart: What do you see as your biggest opportunity?

Lisa: Our biggest opportunity is the continued growth in our industry. We are embracing growth in our firm and we are looking to expand our services when the opportunities arise.

John Colthart: What do you see as the biggest threat or challenge?

Lisa: Our biggest challenge is attracting qualified staff to our practice because of our rural location.

John Colthart: How do you plan to address it?

Lisa: We are currently looking into more options with technology for a remote work environment.

John Colthart: What made you choose MindBridge Ai Auditor? What are the features that you plan to use?

Lisa: We chose MindBridge because we are excited about offering a new value-added service to our clients. This is cutting-edge technology, and it is not something in which others in our area are participating. The entire concept is new to us, but initially we are planning to leverage the risk-based assessment of transactions. This will enable us to review transaction-level risk in a much more effective and efficient way than we currently do.

The Impact of Artificial Intelligence and Machine Learning on Financial Services and the Wider Economy


Recently I was invited to participate as a speaker in the Official Monetary and Financial Institutions Forum (OMFIF) podcast focusing on Artificial Intelligence (AI) and machine learning. OMFIF is an independent think tank for central banking, economic policy, and public investment – a non-lobbying network for best practice in worldwide public-private sector exchanges. This podcast aimed to provide analysis on developments in financial technology, regulation, artificial intelligence and financial inclusion. Below is an excerpt and transcribed version of the podcast.

Interviewer: There is no single definition of artificial intelligence, and it is regularly used as shorthand to talk about everything from chatbots to deep learning. When it comes to financial services, an increasing number of companies across all sectors have been working on creating real-life AI use cases and applications for this range of technologies. At the heart of the AI revolution are machine learning algorithms: software that self-improves as it is fed more and more data. This is a trend that the financial industry can benefit from immensely. How has AI changed the financial services industry over the past five years, and where do you see the greatest application of AI and machine learning algorithms within the financial services sector?

Robin: I think where you see AI being adopted most of all is in places with big data problems, where a normal human can’t cope with the volume and the scale. So, if you take audit as an example, maybe you have a human being looking at transactions to verify whether the transactions are good or not, and an auditor has to come in and very quickly look at all of the transactions to find out what is going on. One of the coping mechanisms that human beings had for such a situation was something called sampling: they take a small set so that they can cope with the volume and verify that those transactions are okay. In that situation, we can train AI to look at every transaction and do it in real time as well, which means you are not building up a backlog of transactions to verify. We can codify that human knowledge about what a valid transaction is, and we can do that at a vast scale that would not be possible for a human being. So, the biggest disruptive element, I think, is the ability to codify some degree of human intelligence into these systems and apply them at vast scale, and this is going to cause all kinds of improvements in the quality of activities like auditing. This is applicable everywhere there is a lot of data and a need to take some degree of understanding of a problem domain, train an AI system, and apply it at scale.
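To make the sampling-versus-full-population point concrete, here is a minimal Python sketch that scores every transaction in a synthetic ledger with an off-the-shelf anomaly detector (scikit-learn’s IsolationForest). The data, features, and model choice are illustrative assumptions, not the approach MindBridge actually uses.

```python
# Sketch: score 100% of transactions for anomalies instead of sampling.
# Synthetic data and a generic detector are used purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical ledger: amount and posting hour for 100,000 transactions.
transactions = np.column_stack([
    rng.lognormal(mean=6.0, sigma=1.0, size=100_000),  # transaction amounts
    rng.integers(8, 18, size=100_000),                 # posting hour of day
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# Score the full population: lower scores mean more anomalous.
scores = model.score_samples(transactions)
flagged = np.argsort(scores)[:100]   # the 100 most unusual entries for review
print(f"Flagged {len(flagged)} of {len(transactions):,} transactions for review")
```

The point is not the particular algorithm but that the whole population is scored, so nothing falls outside the net the way it can under sampling.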

Interviewer: The idea of collaboration is very important between tech and financial services. Robin, as someone who works within an AI company, if you will, what do you see as challenges when it comes to financial institutions adopting the new technology and is there anything that you think can be done to expedite this whole process?

Robin: I see that there are opportunities in building trust in AI, and certainly I’ve seen that as one of the big issues that organizations working in the AI field really need to think about. If you think about the types of roles where AI is being used, people like financial accountants, auditors, and even lawyers are being assisted by AI these days, and in that environment they quite often have to justify their actions, so what you can’t have is the AI being a black box in that scenario. What you need is an AI system that can explain its workings. I know at MindBridge we spend a lot of time thinking about, as we apply algorithms to these areas, how we explain the findings so that they can support the conclusion. One example is explaining why a transaction is flagged as unusual or normal. We took that approach because some of our users can be asked to stand up in a court of law and justify an action they have taken, so they need all of that evidence. So, I think building AI responsibly, in a way where systems can explain themselves, is a very big part of building trust in AI.
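As a toy illustration of what “explaining why a transaction is flagged” can look like, the sketch below reports which features of a transaction deviate most from a historical baseline. It is a deliberate simplification under assumed features (amount and posting hour), not MindBridge’s actual explanation method.

```python
# Toy explainability sketch: describe which features of a transaction are
# unusual relative to history. Simplified for illustration only.
import numpy as np

def explain_flag(transaction, history, feature_names, threshold=3.0):
    """Return human-readable reasons a transaction looks unusual."""
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-9          # guard against division by zero
    z = (transaction - mean) / std            # per-feature deviation in std units
    reasons = [
        f"{name} is {z_i:+.1f} standard deviations from its usual value"
        for name, z_i in zip(feature_names, z)
        if abs(z_i) > threshold
    ]
    return reasons or ["no individual feature is unusual on its own"]

# Hypothetical history of 10,000 past transactions (amount, posting hour).
history = np.random.default_rng(1).normal(loc=[500, 12], scale=[120, 2], size=(10_000, 2))
print(explain_flag(np.array([5_000, 3]), history, ["amount", "posting hour"]))
```

Real systems combine many more signals, but the principle is the same: the flag comes with reasons a practitioner can read and defend.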

One of the often-overlooked problems for people working in AI is that they focus on the algorithm and don’t think about the communication of the outcome. I think that’s one of the big challenges that people working in the AI industry need to think about. There is a lot of work going on at the moment in the AI space, and some of the deep learning technology that people are raving about has driven a lot of the growth in AI. We need to think about how we take those technologies and turn them into something that people, including non-technical people, can understand. So, I would say that’s one of the biggest barriers to adoption.

Also, smaller firms should be working with the big companies and regulators. A lot of the new technologies are being driven by small, agile innovators, and working with regulators or larger organizations helps both sides: on one side the technology matures faster, and on the other side awareness of the state of the art and of the possibilities of such technologies is conveyed.

To listen to the full podcast by OMFIF, please click the link: https://www.podbean.com/media/share/pb-caaqn-72faa1