By Martin Taylor, Content Guru’s Co-Founder and Deputy CEO

In 1950, the pioneering computer scientist Alan Turing hypothesized a world in which machines could think like humans, using reasoning and information. By 1997, Artificial Intelligence (AI) was capable of beating the human world champion at chess, ushering in a new era of quasi-cognisant machines. In the past five years, AI has spread almost unnoticed into every corner of our lives, growing steadily more vital to our daily routines. Whenever you ask Alexa, Bixby, Cortana, or Siri a question, AI is working in the background. Within a decade (that most traditional of forecasting terms), the capabilities of AI will surpass even the wildest of today’s predictions. Some experts predict that by 2030 AI will allow many jobs to be performed far more efficiently and will enable services such as hyper-personalized medicine and education.

However, as the late Professor Stephen Hawking prophesied: “Alongside the benefits, AI will bring dangers”. Whilst the benefits are wide-ranging and already improving our lives, understanding the risks and challenges surrounding AI-based technology is arguably the most important hurdle left to clear. How should we build consumer trust in AI? How can we give automated technologies space to grow, evolve and, eventually, transform our lives for the better? This article discusses the two key prerequisites for AI’s continued progress, trust and regulation, and how they are set to transform the landscape of artificial intelligence over the next decade.

Without Trust, Concern Overshadows Anticipation  

Despite the best efforts of the tech giants and futurist enthusiasts, only 18% of Americans said they were “more excited than concerned” about the increased use of AI in daily life. The chief executive of the newly launched National Robotarium has likewise acknowledged public unease around the technologies used in AI and robotics. At the root of many of these concerns is a perceived lack of human input, so AI is heavily scrutinized whenever errors occur. Trust is further eroded when more threatening applications of AI make headlines. AI can even inherit human biases, discriminating unfairly between applicants or generating sexist images. Without regulation, these sinister side effects undermine trust in AI as a whole, and AI becomes a buzzword that sceptics can weaponize to hinder technological adoption.

Turning the Tide 

To succeed, AI needs to win trust by embedding human consent into everyday processes. Demonstrating how and why an AI system reaches its conclusions is key, and transparency amongst AI developers, organisations and end users is essential to overcoming consumer reluctance. This is especially important for deep-learning technologies, such as neural networks, which refine themselves over many generations of trial and error until even their developers struggle to explain their inner workings: the so-called black box problem. Opening the black box, by explaining the complex internal processes of an AI system in terms people can follow, makes these systems less opaque and so increases confidence in them. Some developers have already created games to help people comprehend deep-learning AI. Being transparent about how the data used to inform AI systems is collected, used, and shared likewise helps to inform consumers and build confidence.
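To make the idea of “opening the black box” more concrete, the short Python sketch below is one illustration only, assuming scikit-learn and an entirely synthetic dataset with hypothetical feature names. It shows a widely used transparency technique, permutation feature importance, which reveals which inputs a trained model actually relies on when making decisions.

# A minimal sketch of one "open the black box" technique: permutation
# feature importance. Assumes scikit-learn is installed; the dataset is
# synthetic and the feature names are purely hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for something like a loan-approval dataset.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=2, random_state=0)
feature_names = ["income", "credit_history", "age", "postcode"]  # hypothetical labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how far accuracy falls:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")

A report like this does not expose every internal weight, but it gives customers and regulators a plain-language answer to “which factors drove this decision”, which is exactly the kind of transparency that builds confidence.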

Revolution through Regulation 

The second key pillar for establishing trust is regulation. Emerging technologies often lack it, and an unfortunate side effect is the emergence of bad actors looking to exploit short-term freedoms. Regulation promotes best practice, benefits all stakeholders, and holds companies accountable for their actions, which in turn can lead to better and more accessible products and services. A newly proposed European legal framework, the EU’s Artificial Intelligence Act, sorts AI systems by risk; AI underpinning essential services, such as healthcare, would be deemed ‘high-risk’, placing stronger regulatory burdens on the operator. For AI to prosper, companies also need adequate support structures for when their AI systems fail or produce unexpected results. Remedial measures include giving customers and employees effective ways to report issues or concerns about an AI system, so that those responsible can take appropriate action.

One day, we can hope to blur the line between national and global AI jurisdictions and create a cohesive international policy: a safer framework that crosses borders and allows countries to unite around a shared common interest. Much like human rights law, such a framework would need to be devised, and continually revised, to cover new use cases, ensuring that political, military or financial objectives can never trump safety.

In summary, the potential for AI to improve our lives is enormous, but so are the risks if it is used incorrectly or irresponsibly. Companies and other organisations must work together to ensure that AI is used conscientiously, promoting its benefits whilst reducing unintended side effects. At the same time, businesses need to keep pace with regulatory requirements and be transparent about how they use AI, especially where there is a risk of real harm, in order to build and maintain consumer trust.