The AI safety summit: A need to focus on near-term data risks and boost investment in UK tech
Steve Elcock, Founder and CEO of elementsuite, the leading SaaS AI platform for HR and workforce management, shares his opinion on this week’s AI Safety Summit 2023
As a nation, the UK has a superb history of shaping the future. But it is essential that we put in place regulation of AI’s uses so that, as a country, we can manage its potential and its risks in equal measure. This week’s AI Safety Summit is a strong starting point for that industry discussion, and a chance to boost UK tech investment as we edge closer to responsible use of this powerful technology.
Data bias and current AI threats
With a summit agenda focused on the more dangerous and malicious uses of AI, leaders must also ensure they have responsible data strategies to mitigate some of the biggest, and often underestimated, near-term risks, such as data bias in AI. Data bias refers to systematic and unfair inaccuracies or prejudices in data, which can lead to incorrect, discriminatory, or skewed outcomes. It has already emerged as a problem, showing up in organisations as discrimination, loss of talent and innovation, and legal and regulatory risk.
Organisations must be able to identify, mitigate, and prevent bias in their data and algorithms to ensure fairness, equity, and diversity within the workplace. This means investing in diverse and representative datasets, using “fairness-aware” machine learning techniques, and ensuring transparency in their data collection and modelling. Continual monitoring and auditing of systems and data are vital to detect and correct bias as it arises.
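As a concrete illustration of the auditing step above, here is a minimal sketch of one widely used bias check, the “four-fifths rule” for disparate impact. The group labels, records, and function names are hypothetical and chosen for illustration; this is not any vendor’s implementation, just one example of the kind of continual monitoring the text describes.

```python
# Illustrative sketch: auditing hypothetical hiring data for disparate
# impact. A selection-rate ratio below 0.8 (the "four-fifths rule")
# flags potential bias for further investigation.

def selection_rates(records):
    """Return the selection (hire) rate for each group in the data."""
    rates = {}
    for group in {r["group"] for r in records}:
        subset = [r for r in records if r["group"] == group]
        rates[group] = sum(r["hired"] for r in subset) / len(subset)
    return rates

def disparate_impact(records):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: ten applicants across two groups.
applicants = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "A", "hired": 1},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0},
]

ratio = disparate_impact(applicants)
if ratio < 0.8:
    print(f"Potential bias: disparate impact ratio {ratio:.2f}")
```

A check like this is only a first pass; in practice it would run continually over live data, alongside fairness-aware training techniques, so that bias is detected and corrected as it arises rather than after harm is done.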
Another challenge AI presents concerns privacy under GDPR: the “right to be forgotten” cannot easily be implemented in AI models, because personal data absorbed during training cannot simply be deleted afterwards. In security terms, “AI as UI” opportunities must be counterbalanced with increased security around future voice and facial recognition technologies, as both can be spoofed via deepfakes.
Strategising to mitigate AI risks
There are some key considerations for the UK Government when strategising for safer AI systems, which include:
- Assembling an expert AI taskforce – This group of AI professionals should be multi-dimensional, spanning AI researchers, neuroscientists, legal experts, philosophers, sociologists and psychologists; only the widest range of industry representatives will suffice.
- Discussions must involve open source – It’s not enough to have support from the big players, such as Google and OpenAI. Independent academic papers must also be considered key sources of information, and the open source community, which is strong, must be a critical voice in AI risk mitigation.
- Investing in UK tech will power our AI expertise – Greater support for and investment in the growth of UK tech in general will drive the growth of AI expertise. The UK seems behind in recognising the importance of tech companies, which shows up in the availability of growth funding. Bridging the gap between academia and industry through strong UK academic partnerships (as in the US) will develop the skills the UK currently lacks, namely data scientists, prompt engineers, and model trainers.
- Prioritising critical considerations for privacy and safety – Even accepting some chance of runaway artificial superintelligence in the future, the greater need is to prioritise the real near-term AI risks: data bias, the very real possibility of deepfakes, and the vulnerability of older, less tech-savvy generations to privacy and security threats.
- Shaping regulation around AI in data – Effective regulation will involve spending as much on “adversarial” AI as on “generative/positive” AI. A sensible approach is to mandate an ISO standard that requires organisations to document their AI policies, guidelines and business practices, much like ISO 27001 does for information security management systems.
The congregation of global industry experts at the AI Safety Summit has undoubtedly sparked necessary discussions on AI responsibility in line with the rapidly advancing technology. However, whilst it has kicked off some tactical discussions, the connection between these regulatory starting blocks and the public tech agenda is yet to be made. As the summit continues, it is crucial that it delivers real and imminent resolutions to many of today’s biggest challenges in an evolving AI world.