Redefining financial audits with AI: MindBridge’s verification journey 

Discover insights from MindBridge Edge 2023 with Adriano Koshiyama of Holistic AI. Learn about the intensive audit of MindBridge’s AI algorithms, our SOC 2 and SOC 3 attestations, and our dedication to responsible AI.

At MindBridge Edge 2023, we had the honor of hosting a session led by Adriano Koshiyama, CEO of Holistic AI. Known as leaders in AI governance and auditing, Holistic AI conducted an intensive audit of MindBridge’s algorithms. Their assessment was comprehensive, going far beyond the usual metrics. 

MindBridge proudly holds SOC 2 Type 2 and SOC 3 attestations, both of which are testaments to our dedication to security, trustworthiness, and operational excellence. Beyond these certifications, the specialized audit by Holistic AI validated our commitment to responsible AI and commended our low-risk profile across various dimensions.

Adriano’s session provided attendees with a detailed look into the rigorous validation our technology has undergone. For those who couldn’t join us, we’ve distilled the insights below, ensuring you’re well-informed about the advancements we’re making in the AI domain.

AI ushering in the fourth industrial revolution

AI technologies are rapidly revolutionizing various sectors, from healthcare to finance, acting as the catalyst for what some experts are calling the ‘Fourth Industrial Revolution.’ Alongside the transformative impact of AI, there are complex ethical and governance challenges that organizations must face. This growing complexity underscores the need for robust AI auditing practices. 

Understanding the landscape of AI auditing 

AI auditing is not just about ticking boxes. It’s an in-depth examination of an AI system at multiple stages, from design to post-implementation. Auditors examine a range of factors, including privacy, fairness, robustness, and transparency; a detailed analysis of the technology stack, code, and data is also necessary.

Initial steps of an AI audit 

Before diving into the complexities of an AI audit, there’s a critical preliminary phase: information gathering. This involves understanding the AI system’s purpose, the outcomes it aims to achieve, and the team behind its development. This initial step provides the necessary context for a more effective and targeted auditing process. 

Risk factors and key verticals 

AI auditors often begin by identifying potential risk factors within a system. They conduct a risk assessment based on key verticals like privacy, bias, robustness, and transparency. The priority of these verticals may vary depending on the context in which the AI system operates. For example, in the HR tech sector, bias and privacy often take precedence over robustness and transparency. 
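
To make this concrete, here is a minimal sketch (in Python) of how context-dependent priorities might be expressed as weights over the four verticals. The contexts, weights, and scores are illustrative assumptions, not Holistic AI’s actual methodology:

    # Illustrative only: context-dependent weights over the four risk verticals.
    # Contexts, weights, and scores are hypothetical, not an actual audit methodology.
    VERTICALS = ["privacy", "bias", "robustness", "transparency"]

    WEIGHTS = {
        # In HR tech, bias and privacy often take precedence...
        "hr_tech": {"privacy": 0.35, "bias": 0.35, "robustness": 0.15, "transparency": 0.15},
        # ...while a trading system might weight robustness more heavily.
        "trading": {"privacy": 0.10, "bias": 0.10, "robustness": 0.50, "transparency": 0.30},
    }

    def weighted_risk(context: str, scores: dict) -> float:
        """Combine per-vertical risk scores (0 = low, 1 = high) using context weights."""
        weights = WEIGHTS[context]
        return sum(weights[v] * scores[v] for v in VERTICALS)

    scores = {"privacy": 0.6, "bias": 0.8, "robustness": 0.3, "transparency": 0.4}
    print(f"hr_tech: {weighted_risk('hr_tech', scores):.2f}")   # bias-heavy context
    print(f"trading: {weighted_risk('trading', scores):.2f}")   # robustness-heavy context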

Risk and trade-off analysis 

AI auditing isn’t a one-size-fits-all procedure; it involves nuanced trade-off analyses between different risks and metrics. Depending on the nature and scope of the AI system, these trade-offs help auditors and organizations arrive at informed decisions, making the auditing process truly valuable. 

Quick assessment techniques 

You don’t need to be a domain expert to make an initial evaluation. Start with a few simple questions: 

  • Does the system have an impact on life prospects? 
  • Is it used internally or externally? 
  • Does it handle identifiable individual data? 
  • What’s the potential financial damage? 

The answers can quickly point you toward concerns like bias, transparency, privacy, and robustness. 
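
As an illustration only, a triage like this can be captured in a few lines of code. The questions, thresholds, and mappings below are hypothetical assumptions, not a formal audit methodology:

    # Hypothetical screening sketch: maps the answers above to initial risk flags.
    # Field names, thresholds, and mappings are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ScreeningAnswers:
        affects_life_prospects: bool       # e.g., hiring, lending, admissions
        externally_facing: bool            # used outside the organization?
        handles_personal_data: bool        # identifiable individual data?
        potential_financial_damage: float  # rough estimate, in dollars

    def flag_concerns(a: ScreeningAnswers) -> list:
        concerns = set()
        if a.affects_life_prospects:
            concerns |= {"bias", "transparency"}
        if a.handles_personal_data:
            concerns.add("privacy")
        if a.externally_facing or a.potential_financial_damage > 100_000:
            concerns.add("robustness")
        return sorted(concerns)

    answers = ScreeningAnswers(True, True, True, 500_000.0)
    print(flag_concerns(answers))  # ['bias', 'privacy', 'robustness', 'transparency']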

Transparency requirements under the EU AI Act 

It’s crucial for organizations to understand that internally developed AI is still subject to transparency requirements under the EU AI Act. The act’s regulatory framework introduces considerable challenges and obligations, emphasizing the importance of due diligence and a thorough understanding of AI system development and implementation. 

Addressing the explainability and transparency quandary 

AI systems can be like black boxes, so explainability and transparency are vital for mitigating risks. Auditors examine whether: 

  • The system has a dictionary of variables or data set sheets. 
  • Post-hoc techniques are applied to generate explanations from the model. 
  • Model cards are utilized to explain how models were developed. 
  • Interpretable-by-design models are being created. 

Furthermore, recourse interfaces are invaluable. These tools empower users to challenge the decisions made by AI systems.  
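
For readers curious what a model card might look like in practice, here is a minimal sketch. The model name, fields, and values are hypothetical; real model cards, in the spirit of Mitchell et al.’s “Model Cards for Model Reporting” (2019), are considerably richer:

    # Minimal, illustrative model card; fields loosely follow Mitchell et al. (2019).
    # The model name and values here are hypothetical examples.
    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        name: str
        intended_use: str
        training_data: str        # provenance of the training data
        metrics: dict             # metric name -> measured value
        limitations: list = field(default_factory=list)

    card = ModelCard(
        name="example-anomaly-scorer",  # hypothetical model
        intended_use="Rank transactions for human review; not for automated rejection.",
        training_data="De-identified general-ledger entries (illustrative).",
        metrics={"precision_at_100": 0.87, "recall": 0.72},
        limitations=["Lower recall on low-volume accounts"],
    )
    print(card.name, "->", card.metrics)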

But beyond these measures, one critical takeaway from the Edge 2023 event was a piece of advice from Adriano Koshiyama: when seeking to truly understand the intricacies and risks of an AI system, especially if you aren’t technically inclined, it is often most prudent to hire a third-party company to conduct an in-depth audit. This ensures a thorough analysis and provides an unbiased perspective, giving businesses and users a high level of assurance in the AI systems they rely on. And that’s exactly what we did here at MindBridge. 

Assurance: The final outcome 

Assurance is the end goal of the AI auditing process. Different sectors may have different forms of assurance. Third parties often provide assurance, sometimes requiring accreditation from government bodies. In the future, insurance products specific to AI risks could play a significant role. This could take the form of product liability insurance or professional indemnity insurance. 

Governance and accountability 

At the organizational level, AI auditing helps to achieve an adequate level of governance. It starts with interviews and checklists and can progress into a more technical form of assessment where code and data are scrutinized. Accountability isn’t just for the tech department; it should be a company-wide effort. Cross-departmental activities are key to implementing robust AI governance. 

Beyond the basics: Levels of auditing 

There are various ‘levels’ at which an algorithm audit can be performed. These range from white-box access, where auditors can examine all code and data, to more limited black-box or API-only access. The appropriate level often depends on a risk-based approach: high-risk projects, like self-driving vehicles, may require more in-depth access for auditing. 
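
To illustrate what API-only access can still accomplish, here is a hedged sketch of a black-box fairness probe. The score() function stands in for the audited system’s endpoint, and the data, decision threshold, and choice of metric (demographic parity gap) are all assumptions made for the example:

    # Hedged sketch of an API-only (black-box) probe: estimate the demographic
    # parity gap by querying a scoring endpoint the auditor cannot inspect.
    # score() stands in for the remote call; its logic here is a stub.
    def score(applicant: dict) -> bool:
        # In a real audit this would be an API call to the vendor's endpoint.
        return applicant["income"] > 50_000  # stub decision rule

    def demographic_parity_gap(applicants: list, group_key: str) -> float:
        """Difference between the highest and lowest positive-decision rates across groups."""
        rates = {}
        for group in {a[group_key] for a in applicants}:
            members = [a for a in applicants if a[group_key] == group]
            rates[group] = sum(score(a) for a in members) / len(members)
        return max(rates.values()) - min(rates.values())

    applicants = [
        {"income": 60_000, "group": "A"},
        {"income": 40_000, "group": "A"},
        {"income": 70_000, "group": "B"},
        {"income": 80_000, "group": "B"},
    ]
    print(f"parity gap: {demographic_parity_gap(applicants, 'group'):.2f}")  # 0.50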

AI auditing as the new business norm 

AI auditing is quickly evolving from a ‘nice-to-have’ to a ‘must-have’ in the business world. Just as SOX compliance and ISO standards became non-negotiable benchmarks in the past, AI auditing is on track to become an essential component of business integrity and customer trust. 

The future of AI auditing 

AI auditing is rapidly becoming a cornerstone of sound business and customer care. We’re shifting from a decade where data was the focus to one where the emphasis will be on AI conduct. The 2020s will bring an explosion of new terminologies and practices like AI risk management, AI governance, and AI compliance. 

AI auditing isn’t just a trend; it’s fast becoming a business necessity. As Adriano put it, it’s an ‘exciting decade,’ and we’re just getting started. 

Interested in knowing more about AI auditing? View the entire session on-demand for a deeper dive.

For more insights, you can also refer to Adriano Koshiyama’s academic paper mentioned during the session.