Thinking of using ChatGPT in your business? Read this first
By Dr Kendra Vant, Executive General Manager of Data at Xero
The world can’t stop chatting about artificial intelligence (AI). In December 2022, OpenAI’s ChatGPT took the world by storm with its surprising “human-sounding” answers to questions, ranging from the practical to the bizarre. Since then, there has been a wave of new applications built on top of ChatGPT and Google’s Bard technology, promising to help people do things faster, better and smarter.
In what feels to most like the blink of an eye, ChatGPT has transformed how businesses can read, write, code and create. It can produce content far faster than people can consume it. But this comes at a hidden human cost. The use – or misuse – of generative AI apps like ChatGPT and Stable Diffusion, for both text and images, has sparked fierce debate around the legal and ethical issues related to copyright, privacy, transparency and bias.
There are also growing concerns about its impact on the workforce and the economy as a whole. To regulate or not to regulate? While the decision to regulate AI will be left to governments to figure out, businesses – particularly those with ESG reporting requirements – will need to weigh the pros and cons for their employees, customers and operations to determine the best path forward.
A resurgence of ethical dilemmas
For many professionals working in AI, ChatGPT is not quite the astounding ‘came from nowhere’ phenomenon it can seem to the rest of the world. ChatGPT belongs to a class of AI techniques known as generative modelling, building on mathematical foundations laid in the 1960s. Therefore, many of the AI dilemmas that are being widely discussed today are actually decades old.
One of the oldest and biggest challenges making its way into courthouses across the world is around copyright and the ownership of content. Generative AI models like GPT are trained to mimic the relationships that exist between words in human language, hence the technical name of Large Language Models or LLMs. To train these LLMs, the usual approach involves scraping the massive volumes of text on the internet.
Who owns copyright in content generated by apps like ChatGPT? And if a person or business uses ChatGPT to generate content that infringes on existing copyright, who is liable for the infringement? It is simply not clear who owns AI-generated content.
ChatGPT and similar tools are also reigniting debates surrounding "fair AI". How do we make sure the decisions shaped or made by AI are fair and perceived to be fair? At the heart of the problem is that algorithms calculate optimal models from the data they're fed. We all remember earlier fiascos like Microsoft's derogatory chatbot Tay and Amazon's gender-biased recruiting tool. There is a risk that content is authored from essentially a single viewpoint, fuelled by the data that is 'in the ether' today.
A practical approach to AI ethics
Over the years, there have been various national and international efforts to create and enact laws regulating AI. In November 2021, 193 countries adopted the first ever global agreement on the ethics of AI, as set out by UNESCO. All those involved in AI development are, in theory, accountable to these AI principles. How this might be enforced is quite a different story.
Many governments are moving to establish principle-based guidelines to address issues of data privacy, fairness and trust, as demonstrated by the European Union's Ethics Guidelines for Trustworthy AI.
What about businesses? How should they balance the risks and benefits that AI brings? As Stanford Fellow Dr Lance B Eliot says, "keep your eyes and ears wide open, and keep your mind right-side up when it comes to where AI is heading and the heady role of those Large Language Models."
Ahead of any significant legislative change, it is up to businesses to put the time, resources and energy into building the right guidelines and frameworks for the use of AI that best suit their needs and the interests of their stakeholders. Here are some tips to consider.
- Verify and validate – be sure you fully understand the outputs from these tools and have reviewed them for accuracy and appropriateness before incorporating them into your work and processes. When using AI-generated content, be sure to verify with other sources and check for bias.
- Reinforce privacy and security – be extremely careful about what information you share with AI technologies, particularly when you are using a service that you do not pay for. There can be serious privacy and security implications if you share customer data or other commercially sensitive information with these tools. For public services, where you don't have a commercial agreement that states otherwise, assume all data you enter becomes part of the public domain. This means only entering data or prompts that can be shared externally.
- Establish a culture of accountability – ensure that principle-based guidelines are accessible and consistent across your business. If building on top of ChatGPT, first decide on the values and business objectives that you want to achieve, and check that you have a commercial business model that supports them once the cost of the third-party service is factored in.
And remember, if it sounds too good to be true, it probably is. Over the next 12 months, we’re likely to see a wave of new AI applications accompanied by eye-catching, headline-grabbing promises. Don’t be afraid to pause and take time to determine first if the benefits outweigh the risks before jumping on the AI train. By taking a proactive but cautious approach, you won’t be left behind at the station or risk that train getting derailed down the track.