Our approach to human-centric artificial intelligence


Where is the AI?

Artificial intelligence (AI) is all around us: it powers the helpful voice on my phone and it’s in the digital assistant on my kitchen counter. Actually, I have to admit liking to say “Alexa, turn on Christmas” to turn my Christmas lights on and off. The device itself is just a simple end-point computer, like a terminal, communicating with a cloud-based service that does all the hard work of interpreting what I say and figuring out what to do.

Many AI systems are not as obvious as Alexa; they surround us, yet we don’t see them. Take the ads on my Facebook feed, for example: an algorithm is figuring out what it knows about me and then which ads will likely work best. Even with Google, what appears to be just a search box is much smarter. If you ask the question “What is the population of Canada,” Google is not just searching documents using its famous PageRank algorithm; it’s doing much more. It’s figuring out that an infographic is the best way to communicate the population of Canada to me and showing this alongside its other insights. It also recognizes flight numbers and does different things depending on context.

What we think is a simple search is much more. AI is sometimes quite subtle, helping us in ways we may not realize.

Good experience design often makes our little AI helpers invisible to us. Two of Dieter Rams’ ten principles of good design are “Good design is unobtrusive” and “Good design is as little design as possible.” We can see why subtle or invisible AI happens: it is considered good design.

 

Does MindBridge hide its AI?

We have a philosophy that when our AI provides insight or direction to users, we give them the feedback they need to both see it and understand it. We believe in human-centric AI, which means the human is the central part of the system; they should be able to understand what the AI is telling them and have explanations at each stage. The AI needs to communicate, and being visible is therefore an essential element in the trust relationship we are endeavoring to create.

Having said that, sometimes we can’t help ourselves: we make the experience seamless and require users to click on little information tabs to find out more. This is a design principle called ‘progressive disclosure,’ which allows a user to select the level of detail they want.

So where is our AI? How do you know it’s there and working? Let’s take three examples from our AI Auditor product and walk through the techniques and the design considerations.

 

#1 Unobtrusive but verifiable

Auditors often have to classify items in audit tools manually. They may need to say what kind of money is held in a certain type of account: whether it’s a cash asset, a liability, or perhaps a non-capital expense. This process of instructing a software tool in what something means is laborious and repetitive. I think it’s fair to say nobody wants to do it, but it’s required to get an accurate view of the finances. This makes it a great candidate for automation with AI.

MindBridge has a built-in account classifier that uses the human-readable label on financial accounts to determine what kind of account it actually is. This is a form of language processing and we use two methods: the first is a simple search, which works well for well-labelled accounts; the second is a neural network classifier, which learns how people classify accounts. The net effect (excuse the pun ☺) is that most users of MindBridge spend little to no time telling our system what an account is. It just knows. We do recommend, however, that users review its findings to confirm or correct them. Our AI also learns from these interactions.
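To make the idea concrete, here is a minimal sketch of such a hybrid classifier in Python, assuming scikit-learn and a tiny, made-up set of account labels and categories. It is illustrative only and not MindBridge’s actual model or taxonomy.

```python
# Hypothetical sketch of a hybrid account classifier: keyword search first,
# with a neural-network text classifier as the fallback. The labels, keywords,
# and categories are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: (account label, category)
TRAINING = [
    ("Petty cash", "cash"),
    ("Cash in bank - operating", "cash"),
    ("Accounts payable - trade", "liability"),
    ("Accrued liabilities", "liability"),
    ("Office supplies expense", "expense"),
    ("Travel and entertainment", "expense"),
]

# Keyword rules that handle well-labelled accounts cheaply
KEYWORDS = {"cash": "cash", "payable": "liability", "expense": "expense"}

labels, categories = zip(*TRAINING)
nn_model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
nn_model.fit(labels, categories)

def classify_account(label: str) -> str:
    """Keyword match first; the neural network handles everything else."""
    lowered = label.lower()
    for keyword, category in KEYWORDS.items():
        if keyword in lowered:
            return category
    return nn_model.predict([label])[0]

print(classify_account("Cash - savings"))           # keyword hit
print(classify_account("Amounts owed to vendors"))  # no keyword, NN fallback
```

The cheap keyword pass covers the easy, well-labelled accounts, while the character-level neural network picks up the messier labels that keywords miss, which mirrors the two-method approach described above.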

This is what it looks like as it’s working: it appears to be loading data, pretty unobtrusive and just doing its thing. This is what it looks like when the user verifies the outcome: the user has the option to change the classification of the account. This is the only real clue that something smart has just happened.

You could be forgiven for not noticing that a lot of work is happening, but there are some real time savings here. Below are some charts comparing simple text search methods against a hybrid of text search and AI together. On simple and well-labelled accounting structures, the accuracy of a text search is indistinguishable from the AI. But as things get a little more complex, we see big wins. Further, as the complexity grows to a massive organization’s accounts, the simple text search accuracy breaks down and doesn’t cope at all. Conversely, the AI method keeps punching through the problem and gets it done. The time savings at the complex level are huge; we are talking hours, if not days, of human time saved in laborious activities.

#2 Search that tells you what it understands and gives you options

The MindBridge search interface is a little different from what you’re used to, as we want everything to be understandable and explicable even at the level of a search box. Have you ever typed a search into Google and not got the results you wanted? Chances are you didn’t scroll to page 2; you typed in a slightly different question and got what you wanted by trial and error.

At MindBridge, we value the AI being visible and explaining itself, so that our users can figure out what part of the question is driving the view of data. Here we see a search user interface where the user types their query; there is no AI yet. The user hits go! The AI system parses the language and uses natural language processing (NLP) techniques to unpack what is being requested. Our NLP understands language in general but also common accounting terminology. It highlights the important terms in the query and filters the transaction list accordingly. Note that the highlights are clickable, so a user can explore other possible interpretations and verify that the AI has understood the question. It also understands complex semantics such as conjunctions, combinations of terms joined by AND, OR, or NOT logical operators. This allows more complex questions to be posed and answered.
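As a rough illustration of the idea (not the patented MindBridge implementation), the sketch below parses a query, reports the terms it recognized (the “highlights”), and builds a filter over a toy transaction list. The vocabulary, fields, and regular expression are all assumptions made for this example.

```python
# Toy query parser in the spirit described above: recognize a few accounting
# terms and a dollar threshold, report what was "understood", and filter
# a transaction list accordingly.
import re

VOCAB = {
    "cash": ("account", "cash"),
    "payable": ("account", "accounts payable"),
    "weekend": ("posted_on_weekend", True),
}

def parse_query(query: str):
    understood, filters = [], {}
    lowered = query.lower()
    for term, (field, value) in VOCAB.items():
        if term in lowered:
            understood.append(term)
            filters[field] = value
    amount = re.search(r"over \$?([\d,]+)", lowered)
    if amount:
        understood.append(amount.group(0))
        filters["min_amount"] = float(amount.group(1).replace(",", ""))
    return understood, filters

def matches(txn: dict, filters: dict) -> bool:
    if "account" in filters and filters["account"] not in txn["account"].lower():
        return False
    if filters.get("posted_on_weekend") and not txn["weekend"]:
        return False
    if "min_amount" in filters and txn["amount"] < filters["min_amount"]:
        return False
    return True

txns = [
    {"account": "Cash - operating", "amount": 12500.0, "weekend": True},
    {"account": "Accounts payable", "amount": 800.0, "weekend": False},
]
highlights, filters = parse_query("cash transactions over $10,000 posted on a weekend")
print(highlights)                                # terms the parser understood
print([t for t in txns if matches(t, filters)])  # filtered transaction list
```

Echoing the recognized terms back to the user is what lets them verify, and if necessary correct, how their question was interpreted.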

In this way, MindBridge users can search vast amounts of transaction data for specific scenarios without writing an SQL query or using similar technical languages. The AI is effectively reading their query back to them, helping them understand what’s driving the results and showing other possibilities. This user interface is very artful as it provides both progressive disclosure and explainable AI, all in a search box.

For transparency, MindBridge has filed a patent for methods used in this search interface. We believe in ‘AI for Good’ and human-centric AI and we use patent protection to ensure the freedom to do the work we do.

 

#3 Ensemble AI

Ensemble AI is the main event at MindBridge and it guides much of our work. We consider its primary role to be a focusing function for people and, as we specialize in finding insights and irregularities in financial data, it allows us to do this in a robust and explainable way.

So how does Ensemble AI work?

First, we need to understand that the ensemble is not just one method or algorithm but many. It’s like having a panel of experts with different types of knowledge and asking each of them what they think about a given transaction or element of data. The system then combines the insights from all of the individual algorithms.

For example, AI Auditor includes standard audit checks, so some of these “experts” follow simple audit rules while others apply advanced AI techniques and algorithms. The point of the ensemble model is that they all work together like an orchestra; the user is the conductor, selecting what’s important to them, and the combined results from the ensemble are presented in an easy-to-follow way.
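Here is a hedged sketch of the ensemble idea, using made-up scorers and weights rather than AI Auditor’s actual control points: each “expert” scores a transaction independently, and user-chosen weights decide how those scores combine into a single risk score.

```python
# Illustrative ensemble: several independent experts each score a transaction
# between 0 and 1, and a weighted average combines them. The rules and weights
# are invented for this example.
def expert_rules_score(txn):
    # Simple audit rule: perfectly round amounts look suspicious
    return 1.0 if txn["amount"] % 1000 == 0 else 0.0

def timing_score(txn):
    # Posted outside normal business hours
    return 1.0 if txn["hour"] < 6 or txn["hour"] > 20 else 0.0

def statistical_score(txn, mean=5000.0, std=2000.0):
    # Crude outlier measure based on distance from a typical amount
    z = abs(txn["amount"] - mean) / std
    return min(z / 3.0, 1.0)

SCORERS = {"expert": expert_rules_score, "timing": timing_score, "outlier": statistical_score}

def ensemble_risk(txn, weights):
    """Weighted combination of all expert scores, normalized to 0..1."""
    total_weight = sum(weights.values())
    return sum(weights[name] * fn(txn) for name, fn in SCORERS.items()) / total_weight

txn = {"amount": 12000.0, "hour": 23}
print(ensemble_risk(txn, weights={"expert": 1.0, "timing": 0.5, "outlier": 2.0}))
```

The weights play the role of the conductor: turning one expert up or down changes how much its opinion influences the combined result, without silencing the others.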

Here’s an example of one of the detailed views of the ensemble at work. The little rectangles with the larger red or green highlights are the individual AI capabilities in the ensemble. Let’s dig deeper into two of these capabilities.

 

Expert score

One example of an AI method we use is an ‘Expert System.’ This is a classical AI method that draws on the knowledge of real-world accounting practice to identify unusual transactions.

How do we capture real-world knowledge? We work closely with audit professionals and quiz them with surveys and specific questions about risky transactions, allowing us to construct an expert system that knows hundreds of account interactions and their associated concerns. We can run this method very quickly on large amounts of data, allowing us to scale human knowledge and highlight issues that a human user looking at a small sample could easily miss.
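A toy version of this kind of expert-system check might look like the following; the account pairings and concerns are invented for illustration and are not MindBridge’s knowledge base.

```python
# Hypothetical expert-system check: a lookup of account pairings that
# practitioners have flagged as concerning, applied to one journal entry.
RISKY_PAIRINGS = {
    ("revenue", "accounts receivable reserve"): "possible revenue manipulation",
    ("cash", "suspense"): "cash parked in a suspense account",
}

def expert_concerns(entry_accounts):
    """Return the expert concerns triggered by the accounts in one entry."""
    accounts = {a.lower() for a in entry_accounts}
    found = []
    for (first, second), concern in RISKY_PAIRINGS.items():
        if first in accounts and second in accounts:
            found.append(concern)
    return found

print(expert_concerns(["Cash", "Suspense"]))  # -> ['cash parked in a suspense account']
```

Because the knowledge is encoded as simple lookups, the check runs in milliseconds across millions of entries, which is what makes it possible to scale a practitioner’s judgment to an entire ledger.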

Rare flows

Ensemble AI can also identify unusual things using empirical methods, leveraging the science of what is usual or unusual. One such method is ‘rare flows,’ a form of unsupervised learning from the family of algorithms known as outlier detection. The nice thing about unsupervised learning algorithms is that they bring no predefined rules or labels; they simply identify what’s unusual in the data and thus let the data speak for itself.
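For a sense of how unsupervised outlier detection works in practice, here is a minimal sketch using scikit-learn’s IsolationForest as a stand-in; the rare flows algorithm itself is not public, and the encoded flow features below are purely illustrative.

```python
# Minimal outlier-detection sketch: most flows move small amounts between the
# same pair of accounts, and one large flow between unfamiliar accounts stands out.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [dollar amount, encoded debit account, encoded credit account]
flows = np.array([
    [120.0, 1, 2], [95.0, 1, 2], [130.0, 1, 2], [110.0, 1, 2],
    [105.0, 1, 2], [98.0, 1, 2],
    [50000.0, 7, 9],   # a rare flow
])

model = IsolationForest(contamination=0.15, random_state=0).fit(flows)
print(model.predict(flows))   # -1 marks flows the model considers outliers
```

Nothing here was told what “normal” looks like; the model infers it from the data itself, which is the sense in which the data is allowed to speak for itself.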

The purpose of this method is to uncover unusual financial activity. We apply this method to all financial activity but the specific PCAOB guidance on material misstatements says:

“The auditor also should look to the requirements in paragraphs .66–.67A of AU sec. 316, Consideration of Fraud in a Financial Statement Audit, for … significant unusual transactions.”

This algorithm finds unusual activity and highlights it, and we also perform this type of analysis with several different ensemble techniques. One of the nice things about the ensemble is that you’re not relying on one method; these techniques can look at account interactions, dollar value amounts, and other outlier metrics and bring them all together.

 

Why human-centric AI is needed in auditing

Most audit standards today, including the international standards, are the result of years of experience with previous cases of accounting irregularities. As such, they are great at identifying the problems of the past. The limitation is that the typical rules-based approach to finding irregularities can never identify a circumstance that was not anticipated, and this is why we should apply AI methods like those described above.

A future-looking audit practice needs to adapt to new circumstances. Every industry is changing as a result of AI adoption, and the ability to uncover new and unusual activity, and explain why it is being flagged, is a key strength of the AI systems used by forward-looking audit professionals.

This is why we need AI in auditing. In the words of John Bednarek, Executive Director of Sales Operations, Marketing & Strategic Business Development at MindBridge, “Auditors using AI will replace auditors who don’t.” The simple reason is that auditors who leverage AI will be faster and more complete in their work, providing a better service to their clients.

Ethical AI goes beyond legal AI


The recent case of the Statistics Canada project to use personal financial data from banks to study the spending habits of Canadians provides a very clear lesson in the ethics of AI. In this case, Statistics Canada has clear legal authority to request and use this data, and it’s very likely that the proposed project conforms with ethical standards for AI and analytics. There is also an excellent case that this project will provide significant public benefit. However, it’s also clear that the project failed to gain a moral license from Canadians, and by failing in this regard, the agency has put the project, and perhaps its freedom to operate, at risk.

Shining a light on the project

At this point, details about the project are difficult to come by and I have not seen evidence of any public consultation or public notice of the project. The project came to light through a news story published by Global News on Oct 26, 2018. Based on the news reports, and a bias towards the general good intentions of government bureaucracy, we can infer that Statistics Canada finds its current survey-based approach to collecting data on Canadian spending habits deeply inadequate. I also expect that the bureaucrats involved saw an opportunity to provide a more accurate picture of Canadian spending habits, more efficiently, and with less burden on members of the Canadian public. After consulting with Justice, they determined that they have the legal authority to do so, and they honestly believe that Canadians by and large trust Statistics Canada with their personal data. So they made the decision to use the legislation governing Statistics Canada and request the data from the banks. I also expect the bureaucrats knew that this request could be misunderstood by the public, so they decided to act out of the public eye, trusting that the banks would comply without fuss. Of course, this project will benefit the banks greatly.

What possibly could go wrong?

Application to analytics and AI

I want to stress that there was no malice in the bureaucratic intentions behind this project. To the contrary, I see the motivations as things we want to encourage: innovation, efficiency, improved quality, and Canadian competitiveness. Where things may have gone wrong is a long-standing bureaucratic culture of secrecy. The causes of, and solutions to, this problem with bureaucratic culture are a topic for another day.

No doubt there will be calls for changes to the Statistics Act, but I think cries for wholesale changes are misguided. Overall, the Act provides a good example of a legal framework for analytics. I’m not saying that events such as this should be ignored; rather, the justice department should be tasked with reviewing the Act and regulations with the goal of improving the legislation, perhaps by making public consultation mandatory when Statistics Canada wants to collect personal data indirectly.

Legislative frameworks for analytics and AI must do a few things well:

  • They must protect privacy
  • They must ensure that the collection and use of personal data contributes to the general social welfare broadly defined
  • They must protect the ability to innovate

On this last point, legislative frameworks must be flexible, protecting against egregious misuse while relying on social and market mechanisms to align activity with public expectations. Authority granted by legislation must also protect the right to innovate from being blocked by a radical few. By these tests, the Statistics Act stands up well.

Having legal authority to do something is not the same as acting morally or ethically. In general, the ethical use of personal data requires that data subjects explicitly consent to the collection and use of their data. One can then assume the data subjects have given the analytics organization a license to use their data for the intended purposes, but in practice this is complicated and there are exceptions to this approach. One such exception is when the use serves the public good. From what I understand of the proposed use of data by Statistics Canada, this test is clearly met.

How we can do better

So what went wrong? The personal data in the possession of the banks was created as part of delivering banking services. The public expectation, perhaps naively, is that this is the only use to which they have consented. The attempt by a third party to access and use this data to develop profiles of consumer spending habits goes well beyond those expectations. In this case, the legal authority to do so is beside the point, and to many people, disturbing. At the very least, a public education campaign describing why this is important to Canada and Canadians, and how each individual will be protected in the process, would have gone a long way to easing the public’s concern.

More thorough consultation and offering individuals the ability to opt out would likely have eliminated all barriers and created a positive opinion of the project. Each time an organization tries to fly under the radar when accessing large quantities of personal data, it creates a risk of public backlash that will saddle the industry with stifling regulation.

The AI industry needs the right to ethically innovate and to do this, we need a regulatory environment that gives latitude to innovate. This requires the public to be confident that industry members will act ethically within the bounds of the legislation. Each time the AI industry goes against these expectations, the right to innovate is put at risk.

 

How accountancy can thrive in the age of AI


The world is changing at a faster pace than ever, leading the Bank of England’s chief economist, Andy Haldane, to state that the disruption caused by the ongoing fourth industrial revolution would be “on a much greater scale” than that experienced during the Victorian industrial revolution. Technology is evolving and infiltrating different industries each day, and the era of artificial intelligence (AI) is very much upon us. But do employees risk technological unemployment with this rise of technology? Or could accountancy instead thrive thanks to the rise of AI?

Change is in the air

The adoption of new regulations around mandatory audit firm rotation has stimulated competition in the market and created real momentum in the accountancy industry. The most progressive firms have identified AI capabilities as an important differentiator, but still appreciate that best practice is a collaborative approach, one that augments human intelligence with artificial intelligence.

In the same way that the human brain cannot compute hundreds of thousands of data points in a split second, a machine cannot always understand the nuance and context of real-world accounting. In combination, an accountant fueled by AI is turbo-charged to make faster, more accurate decisions, while having more time to focus on providing guidance, value, and insights.

Enhancing the practice

Although proactive firms are deploying AI to help drive efficiency, reduce risk, and increase quality in their compliance processes, there remains caution in some parts of the market. Implementing AI to augment and support practitioners in the accountancy world has shown how this technology can benefit the industry, so why is there still hesitancy? It’s a caution driven by myth, misunderstanding, and misconception regarding the perceived black-box nature of artificial intelligence. Each is an unnecessary barrier to the progress all companies need to make if they’re to compete in the modern marketplace.

Often the adoption of AI tools remains hamstrung by the idea that they cannot integrate with existing technology and are complex to use, and this comes down to a misunderstanding of what’s available. The most effective solutions are affordable and designed to work easily alongside people. They’re designed to demystify AI and make it intuitive to use. Moreover, as regulators take an increasingly tough stance on audit failures, AI solutions are a long-term investment that can reduce risk, increase efficiency, and improve the quality of financial analysis.

Collaboration, not isolation

In the age of AI, every company, including those in the financial industry, must become a technology company in order to defend and grow its market. It is no longer a question of whether the accountant’s role will change, but how accountants can equip themselves with the necessary skills to thrive in a changing world. It’s time to forge forward and recognize that accountancy actually benefits from the rise of artificial intelligence, unearthing more of the risk in financial data and providing greater assurance than ever before.

AI is not something for accountancy to fear; it’s something for the industry to embrace in order to enhance auditing practice, increasing accuracy and efficiency.

Click here to find out more about the world’s first and only AI-powered auditor platform.

Answering questions about Ai Auditor


As practical applications of artificial intelligence (AI) are new to the finance space, especially with regards to audit, it’s no surprise that the same questions come up across our expert-led webinars. To help you understand how AI is applied to audit, we’ve collected the most common questions and answers here, as provided by our V.P. Growth, John Colthart.

Q: What programming skills or training are needed to use Ai Auditor?

Our goal is to minimize training and make the platform easy to use, a different philosophy from some of the older audit tools you may have used in the past. We designed Ai Auditor to be as user-friendly as possible and to help you get to maximum value as quickly as possible, which means you need no programming or scripting skills to get things done. It’s all drag-and-drop actions: mapping your data, running the analysis, and viewing results in as easy a manner as possible.

Of course, we do recommend and include training on using the platform itself. Typically, that’s a kickoff with our customer success team to show you around the platform and help you load in that first data set. We give you a few days to play around with the data and reports, then set up a more focused discussion to help you get the most out of the results, such as understanding what control points do and what the machine learning algorithms are hunting for.

Q: Will Ai Auditor replace our existing audit tools or is it in addition to what we use?

The honest answer is that it depends on what you want to accomplish. If you’re just using a working paper solution to gather data to do quick assessments of a trial balance, our platform would absolutely be an addition to what you’re already using. You would use it to go even deeper into the analysis of the data and bring all our reports back into your working papers to have a much higher level of confidence. On the other hand, if you’re using a data analytics tool, especially a visual tool that doesn’t have machine learning built into it, Ai Auditor could potentially be a more effective and easier to use replacement.

We never say it’s one way or the other because every firm we work with has a different view of how technology supports their people and engagements and how they look at things from a line of business perspective, for example M&A, or all the way through to assurance audit and taxation.

At the end of the day, it really depends on the use case but one thing is certain, Ai Auditor is a tool used to help people be more effective at understanding data and gathering evidence, in the capacity that best suits their needs.

Q: Where does all the data that’s being analyzed come from?

We provide a drag and drop interface to load your data and integrate with the most common ERP systems used today, things like CCH Engagement, QuickBooks, Thomson Reuters AdvanceFlow, NetSuite, Sage Intacct, and more, to pull the various types of data we need. For something like accounts payable, for example, we use information from the ledger itself, including the payables register at the end of the period so we can see what’s outstanding and things such as the vendor name and the user hierarchy.

We also eliminate the need to spend time or IT resources on data extraction, manipulation, and ingestion – we take care of all the data heavy lifting so you can focus on the analysis and results.

Q: Does Ai Auditor help with audit planning?

This one is critical to understand: Our platform isn’t just for performing year-end audits, rather it plays an important role throughout the year, including planning. Our interim analysis is always available, going back to whatever period is available from the data, to help you see and understand how the business is transitioning at various points in time.

We support planning in different ways, such as looking at the data to identify and prioritize where you should be spending more time. It could be potential risk in inventories or accounts payable, or really anything that could influence your thinking around how the business is performing. Additionally, we give you all those control points to show exactly what’s going on in the business, and we can help you derive insights from the available data.

We want you to see and drill down into where the risks are at any point in the year, all for the same price as doing a single engagement at the end of the year.

Q: How and where is your data stored?

MindBridge Ai cloud services are hosted on a secure cloud infrastructure, with our primary and backup providers fully ISO 27001 and SSAE 16 compliant. Our software stack is designed for defence in depth, deploying redundant controls in the infrastructure, network, platform, and application to ensure no single point of failure.

Q: How secure is the data?

Customer data is always protected, using NIST-approved algorithms (AES 256) and the most secure protocols and implementations available. All network connections are encrypted and all data stores, including primary and backup, are encrypted at all times.

Q: How do you control who has access to what data?

MindBridge Ai has zero access to your client’s data. We maintain SOC 2 compliance and we build in very high security around who can see and perform operations on various types of data, with different levels of hierarchical security. Each Ai Auditor customer has their own dedicated database and storage and there’s no interaction between customers or mixing of data.

At the end of the day, securing your client’s data is paramount and being able to secure that internally – who gets access to what pieces – is also of paramount importance for us.

Q: How do we use the results we get from Ai Auditor and include them as part of our overall processes?

Every report is available in a downloadable format, whether it’s images from a screen or some form of data table. For example, our data can be exported to a Microsoft Excel file and attached as a supporting document to your audit report. In fact, we highly recommend taking all the data we provide and showing it to your client, where it won’t cross independence lines, so they see you as the expert, trusted partner you should be, along with the evidence to back it up.

You can also produce reports to share with your end clients including income statements, financial trending analysis, financial analysis, and more.

For more information on Ai Auditor or to book a demo, visit mindbridge.ai.

CPA Firm Taps MindBridge Ai’s Technology in Audit as a Competitive Advantage


An interview with Lisa Zimeskal, CPA, Partner, Hoffman & Brobst, PLLP

According to a survey from the International Federation of Accountants (IFAC), smaller accounting firms are facing significant challenges. Attracting new clients, keeping up with new regulations and standards, and cost pressures versus competitors were among the top concerns of these firms.

To combat these challenges, Hoffman & Brobst, PLLP, a firm of five partners, decided to embrace artificial intelligence (AI) in their audit services as a differentiating advantage for their clients, and the firm now uses the extensible MindBridge Ai Auditor platform in their audit process.

Ai Auditor is an award-winning platform that empowers auditors to detect anomalies in financial data with speed, efficiency, and completeness. The platform leverages expert-taught machine learning and AI to ingest and analyze 100% of financial data, as opposed to traditional sampling techniques, providing higher assurance along with cost savings. Armed with greater insights and boosted efficiency, auditors can focus on what matters most: providing higher value-added services and guidance to their clients.

John Colthart, VP of Growth at MindBridge Ai, recently spoke with Lisa Zimeskal, CPA, Partner, Hoffman & Brobst, PLLP about how AI tools can benefit small firms. Here’s what she had to say.

John Colthart: Tell us about Hoffman & Brobst, PLLP.

Lisa: Hoffman & Brobst, PLLP is a full-service accounting firm in Southwest Minnesota. We provide audit, tax preparation, compilation and review services, in addition to payroll processing and third-party retirement plan administration services.

John Colthart: What do you see as your biggest opportunity?

Lisa: Our biggest opportunity is the continued growth in our industry. We are embracing growth in our firm and we are looking to expand our services when the opportunities arise.

John Colthart: What do you see as the biggest threat or challenge?

Lisa: Our biggest challenge is attracting qualified staff to our practice because of our rural location.

John Colthart: How do you plan to address it?

Lisa: We are currently looking into more options with technology for a remote work environment.

John Colthart: What made you choose MindBridge Ai Auditor? What are the features that you plan to use?

Lisa: We chose MindBridge because we are excited about offering a new value-added service to our clients. This is cutting-edge technology, and it is not something in which others in our area are participating. The entire concept is new to us but, initially, we are planning to leverage the risk-based assessment of transactions. This approach will enable us to review by-transaction risk in a much more effective and efficient manner than we currently utilize.

The Impact of Artificial Intelligence and Machine Learning on Financial Services and the Wider Economy


Recently I was invited to participate as a speaker in the Official Monetary and Financial Institutions Forum (OMFIF) podcast focusing on artificial intelligence (AI) and machine learning. OMFIF is an independent think tank for central banking, economic policy, and public investment – a non-lobbying network for best practice in worldwide public-private sector exchanges. This podcast aimed to provide analysis of developments in financial technology, regulation, artificial intelligence, and financial inclusion. Below is an excerpt and transcribed version of the podcast.

Interviewer: There is no single definition of artificial intelligence, and it is regularly used as shorthand for everything from chatbots to deep learning. When it comes to financial services, an increasing number of companies across all sectors have been working on creating real-life AI use cases and applications for this range of technologies. At the heart of the AI revolution are machine learning algorithms: software that self-improves as it is fed more and more data, a trend the financial industry can benefit from immensely. How has AI changed the financial services industry over the past five years, and where do you see the greatest application of AI and machine learning algorithms within the financial services sector?

Robin: I think where you see AI being adopted most of all is in places with big data problems, where a normal human can’t cope with the volume and the scale. So, if you take audit as an example, maybe you have a human being looking at transactions to verify whether they are good or not, and an auditor has to come in and very quickly look at all of the transactions to find out what is going on. One of the coping mechanisms human beings had for such a situation was sampling: they take a small set so that they can cope with the volume and verify that those transactions are okay. In that situation, we can train AI to look at every transaction, and do it in real time as well, which means you are not building up a backlog of transactions to verify. We can codify that human knowledge about what is a valid transaction, and we can do that on a vast scale, which would not be possible for a human being. So the biggest disruptive element, I think, is the ability to codify some degree of human intelligence into these systems and apply them at a vast scale, and this is going to cause all kinds of improvements in the quality of activities like auditing. This is applicable everywhere there’s a lot of data and a need to take some degree of understanding of a problem domain, train an AI system, and apply it at scale.

Interviewer: The idea of collaboration is very important between tech and financial services. Robin, as someone who works within an AI company, if you will, what do you see as challenges when it comes to financial institutions adopting the new technology and is there anything that you think can be done to expedite this whole process?

Robin: I see that there are opportunities in building trust in AI, and certainly I’ve seen that as one of the big issues that organizations working in the AI field really need to think about. If you think about the types of roles where AI is being used, people like financial accountants, auditors, and even lawyers are being assisted by AI these days, and in that environment they quite often have to justify their actions, so what you can’t have is the AI being a black box. What you need is an AI system that can explain its workings. I know at MindBridge we spend a lot of time thinking about, as we apply algorithms to these areas, how we explain the findings so that they can support the conclusion. One example is explaining why a transaction is flagged as unusual or normal. We took that approach because some of our users can be asked to stand up in a court of law and justify an action they have taken, so they need all of that evidence. So, I think building AI responsibly, in a way where systems can explain themselves, is a very big part of building trust in AI.

One of the often-overlooked problems for people working in AI is that they focus on the algorithm and don’t think about communicating the outcome. I think that’s one of the big challenges people working in the AI industry need to think about. There is a lot of work going on at the moment in the AI space; some of the deep learning technology that people are raving about has driven a lot of the growth in AI. We need to think about how we take those technologies and turn them into something that people can understand, including non-technical people. So, I would say that’s one of the biggest barriers to adoption.

Also, smaller firms should be working with the big companies and regulators. A lot of the new technologies are being driven by small, agile innovators, and working with regulators or larger organizations helps both sides: on one side, the technology matures faster, and on the other, awareness of the state of the art and the possibilities of such technologies is conveyed.

To listen to the full podcast by OMFIF, please click the link: https://www.podbean.com/media/share/pb-caaqn-72faa1