
Ethical AI goes beyond legal AI


The recent case of the Statistics Canada project to use personal financial data from banks to study the spending habits of Canadians provides a very clear lesson in the ethics of AI. Statistics Canada has clear legal authority to request and use this data, and the proposed project very likely conforms with ethical standards for AI and analytics. There is also an excellent case that the project will provide significant public benefit. However, it is equally clear that the project failed to gain a moral license from Canadians, and by failing in this regard, Statistics Canada has put the project, and perhaps its freedom to operate, at risk.

Shining a light on the project

At this point, details about the project are difficult to come by, and I have seen no evidence of any public consultation or public notice of the project. It came to light through a news story published by Global News on Oct 26, 2018. Based on the news reports, and giving the benefit of the doubt to the good intentions of government bureaucracy, we can infer that Statistics Canada finds its current survey-based approach to collecting data on Canadian spending habits deeply inadequate. I also expect that the bureaucrats involved saw an opportunity to provide a more accurate picture of Canadian spending habits, more efficiently, and with less burden on members of the Canadian public. After consulting with Justice, they determined that they had the legal authority to proceed, and they honestly believed that Canadians by and large trust Statistics Canada with their personal data. So they made the decision to invoke the legislation governing Statistics Canada and request the data from the banks. I also expect the bureaucrats knew that this request could be misunderstood by the public, so they decided to act out of the public eye, trusting that the banks would comply without fuss. Of course, the project would also benefit the banks greatly.

What could possibly go wrong?

Application to analytics and AI

I want to stress that there was no malice in the bureaucratic intentions behind this project. To the contrary, I see the motivations as things we want to encourage: innovation, efficiency, improved quality, and Canadian competitiveness. Where things may have gone wrong is a long-standing bureaucratic culture of secrecy. The causes of, and solutions to, this problem with bureaucratic culture are a topic for another day.

No doubt there will be calls for changes to the Statistics Act, but I think cries for wholesale changes are misguided. Overall, the Act provides a good example of a legal framework for analytics. I am not saying that events such as this should be ignored; rather, the justice department should be tasked with reviewing the Act and its regulations with the goal of improving the legislation — perhaps by making public consultation mandatory when Statistics Canada wants to collect personal data indirectly.

Legislative frameworks for analytics and AI must do a few things well:

  • They must protect privacy
  • They must ensure that the collection and use of personal data contributes to the general social welfare, broadly defined
  • They must protect the ability to innovate

On this last point, legislative frameworks must be flexible, protecting against egregious misuse while relying on social and market mechanisms to align activity with public expectations. Authority granted by legislation must also protect the right to innovate from being blocked by a radical few. By these tests, the Statistics Act stands up well.

Having legal authority to do something is not the same as acting morally or ethically. In general, the ethical use of personal data requires that the data subjects explicitly consent to the collection and use of their data. One can then assume that the data subjects have licensed the analytics organization to use their data for the intended purposes. In practice, however, this is complicated, and there are exceptions to the consent requirement. One such exception is when the use serves the public good. From what I understand of the proposed use of data by Statistics Canada, this test is clearly met.

How we can do better

So what went wrong? The personal data in the possession of the banks was created in the course of delivering banking services. The public expectation, perhaps naively, is that this is the only use to which they have consented. An attempt by a third party to access and use this data to develop profiles of consumer spending habits goes well beyond those expectations. From the public's perspective, the legal authority to do so is irrelevant, and its exercise is disturbing. At the very least, a public education campaign describing why the project is important to Canada and Canadians, and how each individual will be protected in the process, would have gone a long way toward easing the public's concern.

More thorough consultation, along with offering individuals the ability to opt out, would likely have eliminated these barriers and created a positive opinion of the project. Each time an organization tries to fly under the radar while accessing large quantities of personal data, it creates a risk of public backlash that will saddle the industry with stifling regulation.

The AI industry needs the right to ethically innovate and to do this, we need a regulatory environment that gives latitude to innovate. This requires the public to be confident that industry members will act ethically within the bounds of the legislation. Each time the AI industry goes against these expectations, the right to innovate is put at risk.