Emotion AI: ensuring the next wave of AI is more aligned with our emotions
Sara Saab, VP of Product, Prolific
Emotions serve as our guiding force as human beings – but the full complexity and range of human emotions still eludes even the top scientists. Some believe there are fewer than six basic emotions, while others argue there are many more.
As the AI revolution continues, understanding human emotions is crucial. The discourse surrounding ‘emotion AI’ has certainly gained momentum, and we’re seeing this reflected in a whole subset of AI focused on discerning human emotions from facial expressions, voice nuances, body language, and other physical cues.
The evolving sophistication of this technology opens new avenues. AI will of course continue to be used in marketing, but effective emotion AI will also let companies capitalise on consumer personalisation trends by opening up new R&D routes into human behaviour. Further applications are emerging in sectors like healthcare, which demand a heightened level of emotional intelligence. AI could, for example, support therapy chatbots – but only if the technology can offer the nuanced, human-like rapport and personality needed for maximum efficacy.
This is important to get right. The ability of AI and large language models to empathise with human feelings and intentions directly correlates with their capacity to serve us effectively. Let’s explore some of the challenges and opportunities.
Getting to the heart of it: the challenges of emotion AI
Today, emotion AI promises benefits in real-world scenarios, but faces challenges in understanding the true breadth of human emotions. As a result, groups of policymakers in the European Union and United States are arguing that the technology should be banned.
It’s true that emotions are too complex to be understood through technology alone. They demand human input. Yet a growing number of platforms that market their ability to fine-tune AIs through human intervention are actually harnessing AI to do this important work.
Now, new research into methods like Reinforcement Learning from AI Feedback (RLAIF) is showing some merit by allowing AIs to help humans train other AIs at scale. However, unbounded and unchecked, this practice could conceivably lead us to an unrecognisable outcome: ‘artificial artificial intelligence’, as Jeff Bezos puts it – a snake eating its own tail. Without learning from human feedback, AI models will make choices that do not reflect the emotional values and needs of diverse populations. It will also put users off: AI needs to feel human for it to appeal, and for it to be normalised.
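To make the distinction concrete, here is a minimal, hypothetical Python sketch of an RLAIF-style labelling loop with a human audit path. The `ai_judge` heuristic, the field names, and the sampling rate are illustrative assumptions rather than any published method; the point is structural: AI-generated labels can scale the work, but humans remain the source of truth.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreferencePair:
    """One comparison to be labelled: which response is more empathetic?"""
    prompt: str
    response_a: str
    response_b: str
    preferred: Optional[str] = None    # "a" or "b" once labelled
    labelled_by: Optional[str] = None  # "ai" or "human"

def ai_judge(prompt: str, a: str, b: str) -> str:
    """Stand-in for an LLM judge. A real system would prompt a strong model
    with a rubric (e.g. 'Which reply is more empathetic?')."""
    # Toy heuristic for the sketch: prefer the more elaborated response.
    return "a" if len(a) >= len(b) else "b"

def label_with_ai(pairs: list[PreferencePair]) -> list[PreferencePair]:
    # RLAIF step: the AI judge labels every pair, cheaply and at scale.
    for pair in pairs:
        pair.preferred = ai_judge(pair.prompt, pair.response_a, pair.response_b)
        pair.labelled_by = "ai"
    return pairs

def human_audit(pairs: list[PreferencePair], sample_rate: float = 0.2) -> list[PreferencePair]:
    # The check against 'artificial artificial intelligence': a sample of
    # AI-labelled pairs is routed to human annotators for review, so the
    # AI judge never becomes the sole source of truth.
    audit_count = max(1, int(len(pairs) * sample_rate))
    for pair in pairs[:audit_count]:
        pair.labelled_by = "human"  # in practice a person would re-label here
    return pairs

pairs = [PreferencePair("I feel anxious today.",
                        "That sounds really hard. Do you want to talk about it?",
                        "Ok.")]
pairs = human_audit(label_with_ai(pairs))
```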
Regulators should therefore direct their attention to poorly developed emotion AI rather than pushing for an outright ban. Knee-jerk policies will not serve us in this complex new world, and a ban would fail to consider the numerous benefits of emotion AI.
Enhancing emotion AI through feedback loops is key
Even if scientists spend the next few decades struggling to agree on the definition of human emotions, there are still significant advancements to be made in the field of emotion AI.
Key to developing the emotional intelligence of AI is ‘human-in-the-loop’ (HITL) training. HITL, of which Reinforcement Learning from Human Feedback (RLHF) is the current exemplar, requires people – not other AIs! – to provide feedback on information generated by AI. These human annotators rank and rate examples of the AI’s output based on its emotional intelligence. For instance, how empathetic was this AI chatbot? How natural-sounding was its response? How well did it understand your emotions?
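As a concrete illustration, the rating questions above might translate into an annotation record like the following sketch. The field names and 1-to-5 scales are assumptions made for this example, not any particular platform’s schema.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class EmpathyRating:
    """One annotator's answers to the three questions above, on a 1-5 scale."""
    annotator_id: str
    empathy: int        # How empathetic was this AI chatbot?
    naturalness: int    # How natural-sounding was its response?
    understanding: int  # How well did it understand your emotions?

@dataclass
class AnnotatedResponse:
    prompt: str
    response: str
    ratings: list[EmpathyRating] = field(default_factory=list)

    def score(self) -> float:
        # Average across annotators: pooling a diverse panel is what stops
        # any single perspective from dominating the training signal.
        return mean((r.empathy + r.naturalness + r.understanding) / 3
                    for r in self.ratings)

def rank_responses(candidates: list[AnnotatedResponse]) -> list[AnnotatedResponse]:
    # Higher-scored responses become the 'preferred' examples that an
    # RLHF reward model is then trained to reproduce.
    return sorted(candidates, key=lambda c: c.score(), reverse=True)
```

In an RLHF pipeline, rankings like these train a reward model, which in turn steers the language model toward the responses humans judged more empathetic.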
The response to each of these questions requires considered feedback. Real-life annotators must therefore be compensated fairly for their time and – importantly – come from a diverse range of backgrounds. After all, there is no single lived human experience, and diverse life experiences are what shape our emotions.
If we can meet these criteria, then emotion AI will continue to show improvements and efficiency gains over time. The models will not only make choices that reflect the full range of human emotions but also offer emotionally mature responses that meet the unique needs of each user. In short, only human annotators can help machines learn from lived human experience, supporting them to make better, more empathetic decisions.
Emotion AI guided by human-in-the-loop training emerges as a promising avenue, leading not only to inclusivity but also to the elevation of universal human dignity. By assimilating a myriad of human experiences, AI evolves into a more empathetic companion, laying the foundation for better AI-to-human interactions.
This transformative technology not only benefits individual relationships with machines but also holds the potential to improve human-to-human interactions on a broader scale. As we navigate the evolving landscape of artificial intelligence, the infusion of emotional intelligence into technology can help foster a more interconnected and compassionate society.