By Rupam Davé, partner at Harbottle & Lewis
Introduction
Artificial intelligence is having something of a ‘moment’, but its way forward is laden with unknowns. For many businesses, AI offers the opportunity to vastly scale, optimise and generally improve operations at virtually every level. For others, AI has the more sinister potential to create risk, devalue R&D and produce a new generation of security threats. In this article, we’ll look at some of the ways in which businesses can harness artificial intelligence, and at the legal checks that can help mitigate the risks posed by AI tools.
What is AI?
The most basic question about AI is one of the hardest to answer. In fact, there is no universally accepted definition of ‘artificial intelligence’, and the expression ‘AI’ covers a number of different concepts. Broadly speaking, AI is the ability of a machine to emulate intelligent human behaviour, but AI in that complete sense remains a long way off technologically. Most instances of AI in the commercial world relate to a series of smaller-scale algorithmic or trained behaviours.
One of the most common methods of developing artificial intelligence is machine learning. This involves software being ‘trained’ on large volumes of data such as video, audio and text. As the software ingests more and more of this data, it improves its ability to provide accurate results. Well-known artificial intelligence tools developed using this methodology include:
- ChatGPT (by OpenAI): a chatbot which uses natural language to answer text-based questions posed to it. It can help businesses in a variety of ways including with research and writing code.
- Stable Diffusion (by the CompVis group at LMU Munich): an image generation tool which produces images based on the user’s text prompts.
- AlphaCode (by DeepMind): an AI-powered tool which can write computer code at a level comparable to that of human programmers in coding competitions.
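To make the idea of ‘training’ described above a little more concrete, below is a minimal, illustrative sketch in Python using the open-source scikit-learn library. The tiny dataset, labels and topic names are invented purely for illustration; commercial tools such as those listed above are trained on vastly larger and more varied datasets.

```python
# Illustrative sketch of 'training' a model on labelled examples (scikit-learn).
# The messages and labels below are invented for demonstration purposes only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Example training data: short customer messages labelled by topic.
messages = [
    "I would like a refund for my order",
    "My delivery has not arrived yet",
    "Please cancel my subscription",
    "The parcel arrived damaged",
]
labels = ["billing", "delivery", "billing", "delivery"]

# 'Training' means the model learns statistical patterns from the labelled data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# The trained model can then make predictions about new, unseen messages.
print(model.predict(["Has my delivery arrived?"]))  # most likely: ['delivery']
```

The same principle scales up: the more representative examples a model is trained on, the more accurate its outputs tend to become, which is why the well-known tools above rely on such large volumes of training data.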
How AI is being used in business
For those readers worried that this decade will see the birth of Skynet, we can at least rest assured that artificial intelligence is not yet remotely close to replacing humans or replicating human-level intelligence. That said, the technology that already exists can still help us automate and improve many aspects of our personal and business lives. For instance, consider how:
- Online businesses like Amazon and Netflix use AI algorithms to help make better shopping and viewing recommendations.
- Banks routinely use AI to check credit card purchases to minimise fraud.
- Sales and marketing teams use AI to monitor and triage customer messages.
- HR teams use AI to help write job descriptions, answer candidate questions and automate annual leave and other routine requests.
- Computer programmers use AI to help produce new code as well as check existing code for bugs.
Legal tips when considering an artificial intelligence tool
- Contracts and Licensing
Artificial intelligence tools are essentially software products. Consequently, if you are looking to use AI in your business, you need to put in place an appropriate software licence which explicitly allows you to use the AI product for your relevant business purpose. Without the right licence in place, you may find you don’t have the right to use the software as you want and could face expensive claims for unauthorised use.
Additional considerations for AI licences:
- You should never assume that because the software is being provided for free, you can use it however you want. Often software is provided on a trial or personal licence which will prohibit business use.
- Some free software tools are provided under open source licences. These licences should always be reviewed carefully to ensure they don’t include onerous terms (e.g. some open source licences require software and data created using the tool to be made publicly available without charge).
- Intellectual Property Rights
When using an artificial intelligence tool, it’s vital that you understand who owns all the intellectual property in play. Below is a list of questions you should be able to answer by looking at your software licence (and if you can’t find the answers, the licence probably needs amending):
- Who owns the tool?
- Who owns the data that you’ll input into the tool?
- Who owns the outputs from the tool? What rights or restrictions apply in relation to these outputs?
- Will there be any third party intellectual property used by the tool in creating the outputs, e.g. images taken from the internet?
- Is that third party intellectual property being used lawfully by the tool?
- Do you need to be aware of any risks associated with that third party intellectual property (for example, what happens if a third party alleges that something you create using the tool infringes their work)?
- Data Protection and Security
As with all technology projects, it’s also important to think at the outset about the data protection and data security aspects of using new technology. If personal data is being processed by the tool, appropriate steps should be taken to ensure this is done in accordance with applicable data protection legislation. It’s very important that employees do not upload personal data into an online tool without appropriate legal and technical safeguards having first been put in place.
Additionally, robust security due diligence should be carried out if the tool is coming into contact with your own technology infrastructure or commercial data to avoid creating vulnerabilities which could be exploited by hackers.
- Regulatory Landscape and Ethics
We are at the beginning of the legal journey for AI, with many legislatures around the world still only in the early stages of thinking about the rules, regulations and codes of practice that should apply to the use of AI. If you’re looking to deploy AI in a meaningful way in your business, you should consider whether your sector or country already has specific laws or codes in place (or indeed whether any are on the horizon) which could impact the AI you intend to use. Any such regulatory requirements should be considered before making significant investment into an artificial intelligence tool.
You should also review the ethical issues arising from the use of AI. Many artificial intelligence tools have already drawn criticism for producing discriminatory or otherwise biased outputs, which may also be hard to spot as they can be deeply entrenched within a tool. Proper due diligence on both the tool and the vendor will help reduce these risks, but ultimately you will need to maintain robust governance and monitoring procedures when using AI to ensure any issues are picked up early and dealt with appropriately.
About the Author
Rupam is a partner at UK law firm Harbottle & Lewis LLP.
He helps businesses navigate their way through transformative commercial and technology projects. Rupam has particular expertise in advising on emerging technologies, artificial intelligence, the cloud, data protection, strategic commercial arrangements and the growing convergence between the technology sector and other areas.