The emergence of artificial intelligence (AI)-enabled applications has the potential for societal disruption, especially with the advent of generative AI (GAI) tools. There is growing concern that sophisticated use of GAI to create deepfake videos, images, audio, and text can manipulate and/or fabricate content, spreading misinformation, deceiving individuals and organisations, and manipulating public opinion. This online behaviour can further erode trust and credibility in many contexts, such as news, politics, online interactions, and even relations among friends, be they persons or nations, thus leading to a fragmentation of society.
The fact that these applications are called “Artificial Intelligence” does not mean they are intelligent, certainly not in a high-primate sense; nor are they sentient. Today’s AI is based on algorithms designed to process massive amounts of training data using sophisticated statistical analysis and presentation techniques. We should not apply human qualities such as integrity and ethics to label these applications; rather, we should apply these qualities to the way they are developed and to the outcomes associated with their usage.
There is a fundamental difference between the human qualities of integrity and ethics. Integrity is an internalization of beliefs, such as being honest and fair; it is absolute and lies at the core of the human psyche or anima. Integrity manifests itself through ethical behaviour, a cultural set of rules and ideas that has evolved over time against an expanding framework of moral principles. Ethics is thus an externalization shaped by the cultural environment.
GAI has vast potential to create both positive and negative impacts, ranging from individuals to society as a whole. The net result is a segmentation of the world into winners and losers. So, who are they, and what are the implications? There are three broad groups of stakeholders at play in the GAI space: product and service suppliers, users, and potentially governments as some kind of regulator.
The winners
Experts anticipate a significant boon to the world economy, with projections of additional trillions of dollars of world GDP based on primary and secondary usage of GAI-based systems. Aside from financial gains, there will be many benefits to society as a whole as a direct result of newly developed products and services that improve the lives of individuals, the value of companies, and the efficiency of governments.
The current biggest winners are the principal platform providers, with ecosystems expanded through acquisitions of mainly smaller technology-based companies and through strategic and tactical partnerships, including with value-added resellers.
The next big winners, based on revenue and productivity, are the early business and individual adopters of GAI, including business users such as accountants, lawyers, medical practitioners, and business and policy analysts, as well as others such as writers, researchers, educators, and students.
The third category of “winners” is, unfortunately, the emerging range of nefarious actors who use GAI to influence others for a wide range of selfish and/or ideologically motivated purposes, from fraud to espionage. Clearly this category operates in a world where integrity is irrelevant and disruption is a purpose.
The losers
On one hand, the suppliers and mainstream users of GAI currently appear to operate with integrity; however, given the potential for dual use and the abundance of nefarious actors, cracks in the social fabric are starting to appear (e.g., lawyers being duped into citing fake legal cases in court, the creation of faulty financial statements, essay and exam cheating by students and professionals, and insidious ramblings from chatbots). In addition, serious concerns about job security are arising, as seen in the summer of 2023’s American writers’ and actors’ labour dispute, which is due in part to the threat of GAI displacing jobs. Given the potential harm that GAI could rain down on civilization, there is a range of threats with impacts that vary from the individual to society at large.
The link between integrity and AI involves ensuring that AI algorithms, all data, and associated applications operate with integrity and in an ethical manner by respecting human rights, privacy, and fairness. These qualities are essential to build trust in and reliability of AI systems, and they help to minimize malicious uses of AI. All publicly facing AI applications, be they public or private sector, should adopt explainable AI, in which the logic behind an AI decision can be explained, as opposed to the common “black box” form of AI, in which the logic associated with a given output cannot be explained.
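To make the contrast concrete, the sketch below, written in Python with the open-source scikit-learn library, trains a simple model whose decision logic can be read directly from its weights; the loan-style feature names and data are invented for illustration only, not taken from any real system.

```python
# A toy illustration of "explainable" vs. "black box" AI using scikit-learn.
# The loan-style features and data here are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [income, debt_ratio, years_employed]
X = rng.normal(size=(500, 3))
# Synthetic "approved" label, driven mostly by income and debt ratio.
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explainable: each coefficient states how a feature pushes the decision,
# so any individual approval or denial can be traced back to its inputs.
for name, coef in zip(["income", "debt_ratio", "years_employed"], model.coef_[0]):
    print(f"{name:15s} weight = {coef:+.2f}")

# A deep neural network trained on the same data might score comparably,
# but its vast number of weights offers no such per-decision account:
# that opacity is what "black box" refers to.
```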
Most AI systems rely on vast quantities of data for training and decision-making, much of which is scraped from the internet, leading to a new kind of legal problem concerning ownership and use between AI vendors and the owners of the data. Moreover, if the training data are intentionally manipulated or biased, integrity can be compromised. Adversaries can inject misleading or distorted data into the training process to influence the behaviour or outcomes of AI models, leading to incorrect decisions, biased results, and users confronted with misleading information and/or disinformation.
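As a hedged illustration of how poisoned training data can corrupt a model, the following sketch (again Python with scikit-learn, on invented synthetic data) compares a classifier trained on clean labels with one trained after an attacker has deliberately relabelled the most strongly positive examples.

```python
# A minimal sketch of training-data poisoning via targeted label flipping.
# All data is synthetic and invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Clean training set: one informative feature separates the two classes.
X = rng.normal(size=(1000, 1))
y = (X[:, 0] > 0).astype(int)

clean_model = LogisticRegression().fit(X, y)

# The attacker relabels the 300 most strongly "positive" examples,
# distorting exactly the cases the model most needs to learn from.
idx = np.argsort(X[:, 0])[-300:]
y_poisoned = y.copy()
y_poisoned[idx] = 0

poisoned_model = LogisticRegression().fit(X, y_poisoned)

# On fresh, honestly labelled data, the poisoned model degrades sharply.
X_test = rng.normal(size=(200, 1))
y_test = (X_test[:, 0] > 0).astype(int)
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```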
AI-powered chatbots can be programmed to manipulate or deceive users. These chatbots can be designed to impersonate humans, spread misinformation, or manipulate emotions to exploit individuals’ vulnerabilities for data theft, financial fraud, and social engineering attacks such as bullying.
AI algorithms can be manipulated or intentionally biased to achieve specific outcomes or objectives. For instance, in social media platforms or online advertising, AI algorithms can be tuned to amplify certain content, steer user behaviour, or reinforce echo chambers. This can undermine the integrity of online information, distort public discourse, and shape user experiences beyond the user’s intent.
AI algorithms employed in decision-making processes, such as loan approvals, hiring decisions, or criminal justice applications, can inadvertently perpetuate or amplify existing biases and inequalities. If these algorithms are not designed with integrity in mind, they can lead to unjust or discriminatory outcomes, consequently undermining the integrity of the decision-making process.
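One simple audit an organization might run on such a system is a comparison of outcome rates across groups, often called a demographic parity check. The sketch below uses invented synthetic decisions purely to illustrate the idea; it is one of many possible fairness measures, not a complete audit.

```python
# A minimal sketch of a "demographic parity" bias check: comparing a
# model's approval rates across two groups. The group labels and
# decisions are synthetic, invented purely for illustration.
import numpy as np

rng = np.random.default_rng(2)

group = rng.integers(0, 2, size=1000)  # 0 / 1: a protected attribute
# Hypothetical model decisions that happen to favour group 1.
approved = rng.random(1000) < np.where(group == 1, 0.60, 0.45)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_1 - rate_0):.2f}")
# An audit process would flag a gap like this for investigation before
# the system is allowed to influence real loan or hiring decisions.
```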
The privacy of individuals whose personal data are used by AI systems without proper safeguards could be compromised. A lack of data protection measures, unauthorized data sharing, or insufficient transparency about data usage can erode privacy rights and compromise individuals’ control over their personal information.
AI can also be employed to invade privacy by analyzing and mining vast amounts of personal data. AI algorithms can infer sensitive information about individuals, such as their preferences, habits, or personal details, even without explicit disclosure. This intrusion into personal privacy compromises the integrity of individuals’ information and can lead to misuse, unauthorized access, and character and extortion attacks on individuals, and even on the organisations from which the data were inappropriately acquired.
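The sketch below illustrates this attribute-inference risk on invented synthetic data: a model recovers an attribute that was never disclosed, simply because innocuous-looking features correlate with it. The feature meanings are assumptions made for the example, not a real dataset.

```python
# A toy sketch of "attribute inference": recovering a sensitive attribute
# from seemingly innocuous behavioural features. All data is synthetic
# and invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

sensitive = rng.integers(0, 2, size=2000)  # e.g., an undisclosed trait
# Innocuous-looking features (browsing hours, purchase counts, ...) that
# are statistically correlated with the sensitive attribute.
X = rng.normal(size=(2000, 5)) + 0.8 * sensitive[:, None]

X_tr, X_te, s_tr, s_te = train_test_split(X, sensitive, random_state=0)
model = LogisticRegression().fit(X_tr, s_tr)
# Well above the 50% base rate: the trait leaks without ever being disclosed.
print("inference accuracy:", model.score(X_te, s_te))
```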
Today, AI algorithms are widely used in stock markets for algorithmic trading. However, they can also be manipulated to gain an unfair advantage. Malicious actors can employ AI techniques to manipulate stock prices, engage in market shenanigans, or conduct high-frequency trading with the intent of exploiting market vulnerabilities.
If integrity is not prioritized in AI development and deployment, society as a whole could suffer. Unethical AI practices can lead to societal harms, reinforce power imbalances, and erode social trust. If AI systems are not designed with integrity and ethical considerations, marginalized communities can be disproportionately affected.
Biased algorithms or discriminatory practices can reinforce existing disparities and exacerbate social inequalities. It is crucial to ensure that AI technologies prioritize fairness, inclusivity, and equal opportunities for all segments of society.
Furthermore, GAI algorithms can be used to develop automated hacking tools that exploit vulnerabilities, bypass security measures, and conduct targeted attacks. Such activities can compromise the integrity of networks, computer systems, applications, and data, as well as harm the individuals, organisations, and infrastructure that become victims of the attacks.
AI can disrupt jobs: not just manual jobs, many of which are being taken over by AI-enabled robotics from shop floors to warehousing and retail, but also many kinds of knowledge-worker jobs across virtually all professions. This is leading to workforce displacement and job insecurity. In that context, if integrity considerations are not adequately addressed, the impact on affected workers, their organisations, and their clientele may be disruptive. Organizations that fail to manage the job-displacement transition and to support affected individuals can expect social and economic challenges for the organization and disruptions in the community.
The winners and losers in the context of AI and integrity are not fixed or predetermined. The impact can vary based on the actions and decisions taken by stakeholders across different sectors. Individuals who lack awareness or understanding of AI risks and the importance of integrity may be at a disadvantage. They may unknowingly fall victim to biased or discriminatory AI systems or be affected by privacy breaches. Lack of knowledge or control over AI systems can limit their ability to protect their interests and challenge unfair or harmful AI practices.
The exploitative concerns with GAI arise due to the malicious use or manipulation of AI technology and do not inherently stem from AI itself. They are caused by the intentions and actions of those who utilize AI for nefarious purposes. Responsible development, ethical considerations, and appropriate safeguards can help mitigate these risks and preserve integrity in AI applications. By prioritizing integrity, responsible AI practices, and ethical considerations, stakeholders can strive to create a more equitable and beneficial AI landscape for all.
The need for guardrails/regulation
The release of powerful AI-based chatbots in recent months saw an unprecedented level of market penetration, reaching over one hundred million users internationally within weeks of launch. Yet these products and related services are being released without any external safeguards, which can lead to a vast array of illicit uses of GAI.
If AI systems are deployed without proper integrity measures, public trust in the institutions and organizations utilizing AI can erode. Instances of AI failures, breaches of privacy, or unethical use of AI can undermine trust in those responsible for AI development and deployment. Rebuilding trust can be challenging and requires significant effort to re-establish integrity and transparency. Prioritizing integrity in AI development and deployment can help mitigate risks, maximize benefits, and create a more equitable and responsible AI ecosystem for all stakeholders.
Implementing and preserving integrity in AI systems points to the need for some kind of guardrails or a regulatory framework and system of rules. Adhering to regulations promotes integrity by ensuring that AI systems meet specific standards, respect legal requirements, and operate within established boundaries. By following these principles, AI can be developed and deployed in a responsible, transparent, and beneficial manner.
Governments around the world are taking notice and initiating discussions on what kinds of rules are needed and how to coordinate internationally. Recently, the US Government and the major U.S. high-tech companies developing AI-based platforms agreed to establish a voluntary set of guidelines and guardrails for developing such systems. Ensuring ongoing integrity requires that AI systems be subject to continuous monitoring, auditing, and evaluation. This would involve regularly assessing the performance, fairness, and ethical implications of AI algorithms and applications, as well as the quality of any data added to an AI system.
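One concrete form such continuous monitoring can take is statistical drift detection: periodically comparing live input data against the distribution the model was trained on. The sketch below, in Python with SciPy, shows the idea; its data and threshold are illustrative assumptions, not any regulatory standard.

```python
# A minimal sketch of continuous monitoring via drift detection: comparing
# live input data against the training distribution and flagging divergence.
# The data and the 0.01 threshold are illustrative assumptions only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

training_feature = rng.normal(loc=0.0, size=5000)  # distribution at deployment
live_feature = rng.normal(loc=0.4, size=1000)      # drifted production data

# Two-sample Kolmogorov-Smirnov test: are the two samples plausibly
# drawn from the same distribution?
stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"drift detected (KS statistic {stat:.3f}); trigger a model audit")
else:
    print("no significant drift; continue routine monitoring")
```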
AI & integrity in conclusion
If AI systems are developed and deployed with integrity as a priority, society can benefit from improved services, enhanced decision-making, and innovative solutions. Fair and unbiased AI applications can ensure equal opportunities, reduce discrimination, and enhance access to resources and services. Ethical AI can also empower individuals with greater control over their personal data and foster transparency in decision-making processes.
By integrating integrity into the development, deployment, and use of AI systems, organizations can ensure that AI technologies are aligned with ethical principles, societal values, and human well-being. Emphasizing integrity in AI fosters trust, reduces risks, and paves the way for the responsible and sustainable integration of AI into various aspects of society.
Embedding integrity into AI systems requires a multidimensional approach, encompassing technical, organizational, and ethical considerations. It involves a commitment to ethical principles, transparency, fairness, accountability, and responsible governance to ensure that AI systems benefit society while upholding human values.
Without a commitment to developing GAI systems with built-in integrity, and without an approach grounded in what must ultimately be an international regulatory framework, society will become the biggest loser. The results will be unpredictable and could even lead to the demise of ethical principles established over many centuries to enable proper world order. By prioritizing integrity in AI deployment and decision-making, society can harness the potential of AI while upholding ethical standards and ensuring a fair and just future.
Note: The opinions expressed here are those of the authors and do not necessarily reflect the views of the organisations we are associated with.
About the authors
Eli Fathi, a Member of the Order of Canada, is the Chair of the Board at MindBridge. Retired as CEO in 2022, Eli champions ethical AI, mentorship, diversity, and corporate social responsibility. As a serial entrepreneur, he has co-founded companies that employ over 400 people and generated over $600 million in economic benefits. Eli mentors emerging leaders, shares business insights on his blog, My Take on Business, and serves on multiple boards, including C-Com and the NRC Advisory Board on Digital Technology.
Peter K. MacKinnon is the Managing Director of Synergy Technology Management and a Senior Research Associate in Engineering at the University of Ottawa. He is a member of the IEEE-USA Artificial Intelligence Policy Committee. With a background in management consulting, Peter specializes in program evaluation, entrepreneurship, and public policy related to science and technology. He lectures on disruptive technologies and business models at the University of Ottawa, contributing to both academic and policy-making spheres.