

By: Nigel Cannings, CTO at Intelligent Voice.
As of January 2022, around 15% of UK businesses had adopted some kind of artificial intelligence (AI) technology. This translates to 432,000 companies, while a further 12% were either piloting or planning to adopt the tech in the near future.
AI is being deployed across a vast array of sectors and applications, from financial services and HR to healthcare, saving businesses time and money. However, while there is no questioning the benefits offered by AI, there are many questions surrounding the ethics behind the technology, particularly in relation to bias.
The problem of bias in AI was publicly brought to light in 2018 by the scandal surrounding Amazon’s sexist recruitment tool. While the problem has been understood for some years now, finding a solution has proven difficult. So, why is unbiased AI so important and what can be done about it?
Why is AI bias a concern?
The primary reason that we simply can’t leave AI bias unchecked is that the potential of the technology has become so huge. The more processes AI is integrated into, the more dangerous bias becomes. In 2019, for example, America’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software was called out for racism when it was found to rate black offenders as more likely to re-offend than white offenders. This software was used to make vital decisions about an offender’s future, including whether they should be eligible for parole.
If you train a model on intrinsically racist data, it only reinforces the issue and further misguides these decisions. If AI bias is not tackled, we will eventually reach a point where the damage it does is both widespread and dangerous. So how does it happen, and what is the solution?
How does AI bias happen?
Whether it is used for the sorting of invoices or the diagnosis of cancer, AI can only work if it is trained on previously labelled data. Data that a human operative has already assessed as pertinent is fed into the machine’s system, and the machine then identifies the patterns relevant to its decision-making process. It can do this because it takes in a level of detail that would be almost impossible for even the most highly trained human to match. Unfortunately, however, if the data fed into the system contains any form of bias, the machine will learn to replicate those biased opinions: white males are better suited to office jobs, black people are more likely to re-offend. These biases were entirely unintentional, but they manifested anyway, with extremely negative repercussions, because of issues with the data, not the algorithm.
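To see how this happens in practice, here is a minimal sketch in Python using scikit-learn. The “historical hiring” data is entirely synthetic and invented for illustration, but it shows the mechanism: a model trained on labels that correlate with gender learns to score two otherwise identical candidates differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical hiring" data: one genuine signal (skill score)
# and one protected attribute (gender: 0 = female, 1 = male).
n = 5000
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Biased historical labels: past recruiters favoured male candidates,
# so gender leaks into the outcome independently of skill.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model has faithfully learned the bias in its training data:
# the coefficient on gender is large and positive.
print(dict(zip(["skill", "gender"], model.coef_[0])))

# Two candidates with identical skill get very different hire probabilities.
same_skill = np.array([[1.0, 0], [1.0, 1]])  # female vs male, equal skill
print(model.predict_proba(same_skill)[:, 1])
```

Nothing in the algorithm is “racist” or “sexist”; it is simply an accurate mirror of the labels it was given.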
So, can there be a solution? Without human-labelled data we could not train AI models, but using it in its raw state gives rise to bias reinforcement. Worse, bias can hide in places we do not expect. In 2021, it was discovered that AI could accurately identify a person’s race based solely on an x-ray, and researchers struggled to explain why. If we don’t understand how that happened, it is all but impossible to stop it from happening again.
How can we debias AI?
Right now, there is no way to train AI without the use of human-labelled data. While this remains the case, bias is always going to be an issue, so the focus for now needs to be on transparency and explainability.
What many people find frightening about relying on AI for decision-making is that it can be impossible to understand why a decision has been made, even for minor issues such as a declined credit card application. Not being able to find out why is upsetting enough there; in more serious scenarios, such as AI being used in healthcare settings, that lack of information could be devastating.
Explainable AI (XAI) starts to bring transparency into the process. It allows operatives to ask the software why it reached a certain decision and what it based its results on, allowing the reasoning to be scrutinised and any problematic training data cleansed or removed. This ensures that the same mistake isn’t repeated in the future, actively removing bias from the process while enabling businesses, managers, and doctors to justify the decisions that are made. There are also proactive techniques to pre-cleanse training data by removing data that could act as a race or gender identifier, for example a name, or even a college. A 2021 study showed that job candidates with “Black-sounding names” were less likely to be called for an interview than their white counterparts; this is exactly the type of bias that scuppered Amazon’s hiring tool.
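As a rough sketch of both ideas, here is a Python example under stated assumptions: the applicant dataset and its column names are invented, and permutation importance stands in for a fuller XAI toolkit. It drops obvious proxy features (name, college) before training, then interrogates the trained model to see which inputs its decisions actually rest on.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical applicant data; column names are illustrative only.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "years_experience": rng.normal(5, 2, n),
    "test_score": rng.normal(70, 10, n),
    "first_name": rng.integers(0, 100, n),  # stand-in for an encoded name
    "college": rng.integers(0, 20, n),      # can act as a demographic proxy
})
hired = (df["test_score"] + 5 * df["years_experience"]
         + rng.normal(0, 10, n)) > 95

# Step 1: pre-cleanse -- drop features that can identify, or act as a
# proxy for, protected attributes before the model ever sees them.
PROXY_COLUMNS = ["first_name", "college"]
X = df.drop(columns=PROXY_COLUMNS)

X_train, X_test, y_train, y_test = train_test_split(X, hired, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Step 2: interrogate the model -- permutation importance shows which
# inputs the decisions actually rest on, so a reviewer can scrutinise
# the reasoning and flag anything suspect for cleansing or removal.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Dropping named proxy columns is only a first line of defence, since bias can also leak in through correlations between the remaining features; that is precisely why the interrogation step matters.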
Legislation already exists in some countries to protect consumers against the worst ravages of untrammelled AI. In the EU, the GDPR already gives everyone the right to have automated decisions explained. However, until XAI becomes baked into how we train and analyse models, many organisations will struggle to comply, and AI bias will continue unchecked.