By Matt Peake, Global Director of Public Policy at Onfido
Biometrics has quickly become embedded in our everyday lives, particularly facial biometric technology, which is commonly used to unlock smartphones and access online applications. With 8 out of 10 people finding biometrics both secure and convenient, the technology is seeing widespread adoption across financial services.
Biometric verification is powered by artificial intelligence (AI) systems which rely on models trained with data. This enables them to recognise, categorise and classify facial images very quickly and accurately. With 68% of large companies in the UK having adopted at least one AI application, the technology has real consequences for real people and therefore must be built properly.
For this reason, it has to be subject to ethical parameters as it is developed and implemented. Within financial services, this is particularly important given banks and payment service providers are the gateway to financial inclusion and services based on trust.
There are six key considerations typically associated with ethical AI: fairness and bias, trust and transparency, privacy, accountability, security, and social benefit. If just one of these fails, it can have serious consequences for individuals and businesses, including financial exclusion, delayed innovation and growth, and regulatory non-compliance.
Delaying or ignoring the issue, or passing the responsibility to engineering, compliance or legal teams is no longer an option. Leaders within organisations, no matter the department, must take an active role in seeking to address the flaws in their applications and be accountable for the performance of AI that they deploy.
Why is ethical AI so important?
AI is used across multiple functions of finance from fraud detection and risk management to credit ratings, and so plays an essential part in the processes that underpin everyday life. If AI is not ethical, it damages trust in the system and erodes the value of financial services.
Currently, when issues with AI automation arise, human intervention is often the solution. But a manual fallback isn’t always the best answer, as humans are prone to systemic bias. It is widely recognised that bias exists in systems that seek to distinguish the faces of people from ethnically diverse backgrounds. This can lead to the development of non-optimal products, increased difficulty expanding to global markets, and an inability to comply with regulatory standards.
Where discrimination occurs, the consequences can be severe, including alienation from essential services. This is why Onfido takes a proactive stance on reducing bias: it has published guidance on defining, measuring, and mitigating biometric bias, and participated in the UK Information Commissioner’s Office sandbox to pioneer research into the data protection concerns associated with AI bias.
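One common way to make bias measurable is to compare error rates across demographic groups. The sketch below is illustrative only, not Onfido’s published methodology: it computes a per-group false rejection rate (FRR) for genuine verification attempts, where a persistent gap between groups would flag potential bias worth investigating. The group labels and data are made up.

```python
# Hypothetical sketch: comparing false rejection rates (FRR) across
# demographic groups as one simple measure of biometric bias.
from collections import defaultdict

def per_group_frr(attempts):
    """attempts: list of (group, accepted) for *genuine* verification attempts.
    Returns FRR per group: rejected genuine attempts / all genuine attempts."""
    totals = defaultdict(int)
    rejections = defaultdict(int)
    for group, accepted in attempts:
        totals[group] += 1
        if not accepted:
            rejections[group] += 1
    return {g: rejections[g] / totals[g] for g in totals}

# Illustrative data: each tuple is one genuine attempt (group, accepted?).
attempts = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]
rates = per_group_frr(attempts)
print(rates)  # {'group_a': 0.25, 'group_b': 0.5} — a gap worth investigating
```

In practice, defining the groups, gathering representative evaluation data, and deciding what size of gap is acceptable are the hard parts; the arithmetic itself is the easy step.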
Elsewhere, ethical AI is at the heart of regulation. The UK’s AI Governance regulations and the EU’s AI Act outline how trust should be at the centre of how businesses develop and use AI. Not only will it be a requirement for financial services to follow the considerations of ethical AI, but it will be central to future growth. There is also an ongoing requirement for compliance with Anti-Money Laundering (AML) and Know Your Customer (KYC) regulations, holding financial institutions accountable for how they verify customers’ identities. With an investment in ethical AI, financial services will improve the accuracy and reliability of their KYC processes and reduce false acceptance and false rejection rates across the board.
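The two error rates mentioned above pull in opposite directions at a given decision threshold, which is why tuning them matters for KYC. The sketch below uses made-up similarity scores and a hypothetical threshold to show how false acceptance rate (FAR) and false rejection rate (FRR) are computed; it is not any vendor’s actual scoring pipeline.

```python
# Minimal sketch of false acceptance rate (FAR) and false rejection rate
# (FRR) at a given decision threshold. Scores and threshold are illustrative.

def far_frr(genuine_scores, impostor_scores, threshold):
    """Accept a match when score >= threshold.
    FAR = impostors wrongly accepted / all impostor attempts.
    FRR = genuine users wrongly rejected / all genuine attempts."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.85, 0.78, 0.95, 0.66]   # scores for true-identity attempts
impostor = [0.30, 0.55, 0.72, 0.41, 0.25]  # scores for impostor attempts

far, frr = far_frr(genuine, impostor, threshold=0.7)
print(far, frr)  # 0.2 0.2
```

Raising the threshold lowers FAR but raises FRR, so "reducing both across the board" means improving the underlying model so the genuine and impostor score distributions separate more cleanly, not just moving the threshold.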
Implementing ethical AI
There’s no doubt that ethical AI is an evolving challenge that requires financial services to stay on top of their applications as new use-cases emerge and deployment grows.
Developing and deploying ethical AI should be a company-wide initiative. It requires a top-down commitment to ensure ethical practices are embedded into every stage of application development and implementation. Without such an approach, it can be all too easy to fall behind on the challenges of developing and maintaining ethical AI and encounter issues that could otherwise have been prevented. To achieve optimal outcomes, businesses must bring teams together to identify problems, define and formulate solutions, implement them, and then track and monitor their progress.
Executive teams must understand the risks of developing AI that is not ethical and the long-term financial and reputational repercussions it could have. But they must also recognise that ethical AI is the gateway to innovation, driving accurate and efficient financial services that can lead to positive social outcomes – for the benefit of all customers, no matter who or where they are.
The impact of ethical AI
By following the six considerations of ethics, financial services firms can help meet their regulatory obligations, build fair, transparent and secure systems, and demonstrate their ongoing commitment to protecting their customers.
Failure to address ethical considerations, however, runs the risk of causing long-term issues. It can lead to products and services that exclude customers, and may ultimately result in non-compliance with regulations. Embedding ethical considerations into AI development and implementation will ensure that customers are treated fairly, while financial services can protect and improve their brand reputation and build trust with their customers. When creating and using AI technologies, we must ensure that they operate fairly for all individuals and that privacy is respected and upheld.