A comprehensive guide to technology ethics for your business

An auditor explaining technology ethics for AI supervision

The use of artificial intelligence (AI) in business operations is growing, with companies like Google Ventures (which led a multi-million dollar funding round for Resistant AI) and IBM in the mix. It’s easy to see why: AI and machine learning (ML) empower individuals and organizations to perform complex tasks more efficiently while allowing for deeper analysis than ever before. However, like any technology, it’s important to understand what AI and ML are before diving straight into them. Using AI without responsible processes surrounding technology ethics could introduce unnecessary biases and cause more harm than good. AI can speed up production, but it can also create unforeseen legal and technical risks if it’s not fully understood.

Like human workers, AI systems require supervised learning. No matter how entry-level the job is, you wouldn’t simply pull someone straight off the street and throw them into the mix without some orientation and training. AI is no different — it may come with a fancy resumé, but it still needs training data that addresses AI bias.

For example, USC Information Sciences researchers tested two AI databases and found that up to 38.6% of the reported facts were biased. These AI systems are at the foundation of how we search the internet, so it’s important to get them right. The National Institute of Standards and Technology (NIST) explains that the problem runs deeper than biased data, which is why AI needs more transparency.

Addressing these risks starts with examining the biases and ethics of artificial intelligence and how they impact business and society.

Risks of AI without supervised learning

Supervised learning is an approach to creating AI. A computer algorithm is trained on input data labeled with the desired output. The model is trained iteratively until it learns the underlying patterns and relationships between the input data and output labels, which lets it produce accurate labels when presented with never-before-seen data.

Of course, the achievable accuracy depends on the algorithm and the labeled data available. The diversity of that data determines how well the AI can generalize to new use cases. Without enough samples, the model can’t provide reliable results, and training data must be balanced and cleaned so that garbage or duplicate records don’t skew the model’s understanding.
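To make that workflow concrete, here is a minimal sketch in Python using pandas and scikit-learn: deduplicate the labeled data, check the label balance, train a model, and predict on unseen input. The column names, values, and the idea of flagging records are purely illustrative assumptions, not drawn from any particular system.

```python
# Minimal supervised learning sketch with a basic data-quality check.
# All column names and values are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Labeled training data: each row is an input, "label" is the known output.
df = pd.DataFrame({
    "amount":    [120.0, 85.5, 99.0, 120.0, 45.2, 310.9, 12.4, 98.7],
    "num_items": [3, 1, 2, 3, 1, 7, 1, 2],
    "label":     [1, 0, 0, 1, 0, 1, 0, 0],  # e.g., 1 = flagged, 0 = normal
})

# Clean: drop exact duplicates so repeated rows don't skew the model.
df = df.drop_duplicates()

# Balance check: a heavily skewed label distribution is a warning sign.
print(df["label"].value_counts(normalize=True))

# Train: the model learns the relationship between inputs and labels.
model = LogisticRegression()
model.fit(df[["amount", "num_items"]], df["label"])

# Predict on never-before-seen data.
new_row = pd.DataFrame({"amount": [150.0], "num_items": [4]})
print(model.predict(new_row))
```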

Microsoft’s Tay conversational AI chatbot (@TayTweets) is an infamous example of an AI skewed by biased training data. Within 48 hours of being released from closed alpha onto public Twitter, the bot went from an innocent teenager to a radicalized conspiracy theorist and had to be taken down.

Paradoxically, high accuracy isn’t necessarily a good indicator of performance. It could also mean the model is overfitted, or overtuned to a particular training set. Such a model can perform well in test scenarios and then fail miserably when presented with real-world challenges, as we saw with Tay.

OpenAI has several ML models that achieve unprecedented accuracy because of their access to data. GPT-3 is its AI text generator, and Dall-E 2 is its AI image generator. Both rely on enormous training datasets and model sizes (GPT-3 alone has 175 billion parameters) to generate consistently original content.

This is why test data must differ from training data: it keeps pushing the model to learn how to generalize rather than simply recall answers from its training set. At the end of the day, AI and machine learning models determine how to use the data they’re presented with, just like humans do. And we often infuse our own biases into them.
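A simple, common way to catch this is to hold out test data the model never trains on and compare the two accuracy scores; a large gap between them suggests the model has memorized rather than generalized. The sketch below uses a synthetic dataset and an unconstrained decision tree purely for illustration.

```python
# Illustrative overfitting check: compare accuracy on training data
# vs. held-out test data the model has never seen.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic labeled data stands in for real training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Keep 25% of the data aside purely for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# An unconstrained tree can memorize the training set almost perfectly.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)  # typically ~1.0 (memorized)
test_acc = model.score(X_test, y_test)     # noticeably lower

# A large gap between the two numbers is a classic sign of overfitting.
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```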

Removing human bias from AI

Over the last few years, society has grappled with exactly how much human prejudice finds its way into AI systems, sometimes with devastating consequences. We must be profoundly aware of these threats and work to minimize them when deploying AI solutions.

AI systems learn and make decisions based on training data, which can include biased human decisions or reflect historical or social inequities, even if sensitive variables like gender, race, and sexual orientation are removed. For example, Amazon stopped using a hiring algorithm after discovering that it favored applicants based on words like “executed” or “captured” that were more commonly found on men’s resumés.
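One way this happens is through proxy variables: even after a sensitive column is removed, another feature can encode nearly the same information. The toy check below is hypothetical (the column names and values are invented) and simply shows how auditing correlations can surface such proxies before training.

```python
# Hypothetical illustration: dropping a sensitive column does not remove bias
# if a remaining feature acts as a proxy for it.
import pandas as pd

df = pd.DataFrame({
    "uses_word_executed": [1, 1, 0, 1, 0, 1, 1, 0],  # feature kept for training
    "gender_male":        [1, 1, 0, 1, 0, 0, 1, 0],  # sensitive attribute
    "hired":              [1, 1, 0, 1, 0, 0, 1, 0],  # historical outcome (label)
})

# Even if we drop the sensitive column before training...
features = df.drop(columns=["gender_male"])

# ...the remaining feature still largely encodes it (roughly 0.77 here).
print(df["uses_word_executed"].corr(df["gender_male"]))

# Auditing correlations between candidate features and sensitive attributes
# before training is one simple way to surface proxy variables like this.
```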

Flawed data sampling also introduces bias when groups are over- or underrepresented in training data. For instance, MIT researchers found that facial analysis technologies had higher error rates for minorities (particularly minority women), likely because those groups were underrepresented in the training data. This is a serious problem considering these technologies are integrated into law enforcement, retail, and other public-facing sectors.

Additionally, the researchers and engineers deploying these systems can impart their own AI bias. According to VentureBeat, a Columbia University study found that homogeneous engineering teams are more prone to prediction errors.

Two imperatives for action emerge from the growing problem of AI bias:

1. We must responsibly take advantage of the ways AI can improve human decision-making. 

ML systems disregard variables that do not accurately predict outcomes based on the available data. This contrasts with humans, who may lie about or not even realize the factors that led them to decisions like hiring a particular job candidate over another.

2. We should accelerate progress in addressing artificial intelligence bias. 

There are no quick fixes. One of the most obvious steps is also one of the most complex: understanding and measuring “fairness.” To this end, researchers have developed technical definitions of fairness, such as requiring models to have equal predictive values or equal false positive and false negative rates across groups.

Of course, the definition of fairness is intangible and different for everybody. It’s challenging (even impossible) to satisfy everyone’s perspective of what’s fair.
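Still, one of those technical definitions, comparing false positive rates across groups, takes only a few lines of code to check. The predictions, labels, and group assignments below are hypothetical values chosen purely for illustration.

```python
# Illustrative fairness check: compare false positive rates across two groups.
import numpy as np

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # actual outcomes
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 1])  # model predictions
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def false_positive_rate(truth, pred):
    """Share of true negatives that the model incorrectly flagged as positive."""
    negatives = truth == 0
    if negatives.sum() == 0:
        return float("nan")
    return float((pred[negatives] == 1).mean())

for g in np.unique(group):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false positive rate = {fpr:.2f}")

# A large gap between groups fails this particular definition of fairness,
# even if the model's overall accuracy looks fine.
```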

Improving AI trust and transparency

Recent industry discussion about the use of biased training data has spurred some organizations to be more open about how their systems collect and evaluate data in order to head off potential criticism. Some companies now demonstrate not only that their training data is reliable but also that their systems produce fair and trustworthy results.

AI follows patterns to produce answers beyond the human scope, and it does so extraordinarily well. But unexpected shifts in those patterns expose the model to a host of vulnerabilities, which is why we need AI transparency to understand how a model reaches its conclusions.
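One common (if partial) route to that transparency is inspecting which input features most influence a model’s predictions. The sketch below uses scikit-learn’s permutation importance on a synthetic dataset; it illustrates the general idea rather than any specific vendor’s method.

```python
# Illustrative transparency check: which features drive the model's output?
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data with only a few genuinely informative features.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {importance:.3f}")
```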

Business leaders must stay current in this fast-paced research field. Several organizations offer resources, like the AI Now Institute’s annual reports, the Partnership on AI, and the Alan Turing Institute’s Fairness, Transparency, Privacy group, that provide a foundation to build on.

Next, establish responsible processes to mitigate AI bias. Consider using a portfolio of technical tools, along with operational practices like internal red teams or third-party audits. 

MindBridge does this, and we know the topic of trust comes up repeatedly in discussions around the advent and growth of AI in audits. That’s why we’re proud to provide assurance and transparency in AI as the only organization in the industry to have completed comprehensive audits of our algorithms.

Learning more about AI standards

International standards are quickly catching up to address technology use in auditing. Organizations must be aware of how changes to ethics rules can also affect the way auditors address technology use. A draft from the International Ethics Standards Board for Accountants explores specific factors to examine when relying on technology. And MindBridge is at the forefront of anticipating and meeting these standards.

As the world’s leading financial risk discovery platform, MindBridge AI already meets the proposed standards and has undergone independent third-party validation. It is SOC 2® and ISO 27001 certified, has been rigorously tested by various bodies (including under the ICAEW Technology Accreditation Scheme and the Montreal Declaration for a Responsible Development of Artificial Intelligence), and can help auditors and other financial professionals meet any pending industry standards.

AI and ML are fast-paced and powerful technologies revolutionizing every aspect of business. They’re increasingly being adopted in auditing and unlocking efficiency throughout workflows. However, there are deep-seated ethical concerns, and it’s important to always maintain AI trust and transparency. You need a deep understanding of technology and data to train your humans and AI properly.

In accounting and auditing practices, AI has the power to be transformational, but with increasing scrutiny on controls and audit approaches, some may be reluctant to adopt a technology they don’t understand.

Check out our webinar on explainable AI to dive deeper into how artificial intelligence can improve your organization’s risk analytics.