
ChatGPT has just completed its first trip around the sun. 2023 was a year dominated by the promise of what LLMs can, and cannot, do. The technology has upended how we view a multitude of things, from school coursework to the face of Hollywood acting. ChatGPT set the record for the fastest-growing user base in history. But it’s a malleable tool, built on data drawn from across the internet.

OpenAI never expected the popularity of its chatbot to skyrocket. In a company-wide communication, employees bet that the user base would peak at 100,000. Plot twist: ChatGPT hit one million users within five days.

But the tool represents merely a single aspect of a more profound transformation. The true story is the evolving maturity of AI, which opens up novel realms of interaction and uncharted prospects for every type of business, whether you tag items in a retail shop or build products inside some of the largest tech giants of our time.

One year in, organisations must be careful not to be distracted by the bright lights of chatbots. There is a wide range of valuable use cases beyond bots, and the associated risks, both intended and unintended, still need careful handling. Let’s dig into what we’ve learned this year.

Supercharge your AI efforts in line with regulation

Thoughtworks surveyed over 10,000 consumers globally, and the answers reveal that the majority of British consumers (77%) feel ‘nervous’ about GenAI adoption in business. In fact, as many as 35% of respondents think businesses should halt the rapid development of GenAI until effective government regulations are in place. Businesses need to understand that gaining the public’s confidence through ethical AI is not just a regulatory obligation; it is a strategic competitive advantage.

Building consumer trust is paramount in today’s landscape, and companies that prioritise ethical AI demonstrate a commitment to doing the right thing by their customers. By fostering transparency, ensuring fairness, and actively engaging in ethical AI practices, businesses can not only comply with potential regulations but also differentiate themselves in the market, ultimately earning the trust and loyalty of their customers. In an era where public perception carries substantial weight, ethical AI becomes a linchpin in shaping a company’s competitive edge.

Break the bias of emerging AI

AI can be repurposed away from its original design intent, opening the door to misuse, particularly through bias. Consider internet search engines, for example. On a Friday night you Google the best pizza near you, select the first result that pops up, and click ‘order’ without too much thought.

However, what you might not realise is that the website you chose could be a sponsored result, and other pizza places in the area might have a stronger claim to the best pizza in town. This introduces a bias toward one particular viewpoint. Pre-set user choices and default settings in search engines can subtly sway your decisions, leading you to select sources that may not be the most reliable or relevant.

These bias issues are not limited to online searches but are also cropping up in the adoption of GenAI, affecting areas from hiring processes to mortgage applications. It is important to remember that not all AI solutions are built following the kind of robust engineering practices you’d expect.

In our most recent consumer AI research, 47% of respondents cited ‘human societal bias’ as one of their top ethical concerns. Breaking the bias means involving individuals from diverse backgrounds who can scrutinise algorithmic choices and feed that perspective back into the AI system.

An essential part of this means examining a range of metrics beyond raw performance, such as data quality and fairness. AI is moving fast, so these conversations are becoming more and more commonplace in businesses. Diversity of experience and background is critical to making certain that you are creating solutions that think beyond just one group’s perspective.
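To make that concrete, here is a minimal sketch, in plain Python, of what tracking a fairness metric alongside performance might look like. Everything in it is a hypothetical illustration: the hiring-model outputs, the group labels and the 0.2 threshold are invented for the example, not a recognised standard.

```python
# A minimal sketch of checking a fairness metric alongside accuracy.
# All data, group labels and thresholds below are hypothetical.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions per demographic group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group selection rates."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring-model outputs: 1 = shortlisted, 0 = rejected.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, not a regulatory standard
    print("Warning: selection rates diverge significantly across groups.")
```

A check like this is cheap to run on every model release, which is what makes fairness a metric rather than a one-off audit.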

The importance of critical thinking 

As AI becomes more advanced and integrated into software and everyday tasks, there is a danger of people relying on it too much in the workplace. For example, a developer might blindly trust AI-generated code, or an autonomous car driver might become overly confident in the car’s autopilot features.

Although ChatGPT has been incredibly useful to individuals and organisations, there have been blunders too, with cases of chatbots using queries and customer data to train the underlying model further, and even inadvertently outputting that (supposedly) confidential data. Both employees and employers are now wrestling with the organisational etiquette of using GenAI.
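One guardrail worth sketching, on the assumption that queries pass through your own middleware before reaching a third-party model, is redacting obvious personal data from prompts. The patterns below are deliberately simplistic and purely illustrative; production systems need far more thorough detection.

```python
# A minimal sketch of scrubbing obvious personal data from a prompt
# before it is sent to a third-party LLM API. Real deployments need far
# more robust detection (names, account numbers, free-text identifiers).
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace matched personal data with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

query = "Customer jane.doe@example.com on +44 20 7946 0958 wants a refund."
print(redact(query))
# -> "Customer [EMAIL] on [PHONE] wants a refund."
```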

Ensuring thorough testing of AI models and their associated data is critical. Rushing AI products to market should be avoided at all costs. Instead, adhere to the established and proven principles of good product development. Importantly, make sure that any change can be reversed before launch if it does not yield the desired results.
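That reversibility can be as simple as keeping the previous model version behind a configuration flag, as in the hypothetical sketch below: if the new model misbehaves, one config change restores the last known-good behaviour. The model names and config mechanism here are assumptions for illustration.

```python
# A minimal sketch of keeping a model change reversible: route traffic
# to the new model behind a flag, so one config change rolls it back.
# Model names and the config mechanism are hypothetical.

CONFIG = {"use_candidate_model": True}  # flip to False to roll back

def stable_model(query: str) -> str:
    return f"stable answer to: {query}"      # last known-good version

def candidate_model(query: str) -> str:
    return f"candidate answer to: {query}"   # new, under evaluation

def answer(query: str) -> str:
    if CONFIG["use_candidate_model"]:
        return candidate_model(query)
    return stable_model(query)

print(answer("What is our refund policy?"))
```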

While there are legitimate concerns regarding AI’s impact on employment and potential threats, it’s essential to remember that your employees will be the primary users of this technology. Evaluate how AI will affect your workforce and strike a balance between encouraging them to embrace AI innovation and addressing potential issues such as privacy and intellectual property. To address these concerns effectively, fostering curiosity and engaging with diverse perspectives is vital. Give your employees a voice in determining how AI can enhance their work, as they possess valuable insights into where it can be most beneficial.