New technology always arrives on a wave of excitement and hyperbole, but few innovations have generated headlines and conversation like ChatGPT. The use of artificial intelligence is growing, helping us work with greater efficiency and quality, but it is also raising fears. The questions have moved beyond 'Will this take my job?' to matters of ethics and safety.
When using AI in regulated settings such as healthcare, we have to be particularly careful. Those working in the sector need to ensure that conversational AI is surrounded by guardrails that keep it aligned with ethical standards, clinical validation, privacy, safety, and transparency. These standards are essential not only for compliance, but for creating valuable, high-quality products and solutions that are also safe.
AI has made it easier for mental health professionals to detect symptoms and triage patients, refine and confirm diagnoses, and customise treatments to individual needs and characteristics. Virtual health assistants and chatbots promote patient autonomy and self-management, making healthcare more affordable and accessible. By finding patterns in large volumes of data drawn from many sources, AI can help predict trends, identify risk, and improve diagnostic precision. It can pick up on both verbal and non-verbal indicators of psychological distress. From the point of view of users and patients, the agility and non-judgmental nature of AI-guided mental health support has made it usable, meaningful, and relevant to needs across diverse populations. The acceptance of telehealth over the last few years, accelerated by the pandemic, has laid the foundations for a public that is broadly ready for AI solutions. In fact, our research into the mental health of employees, All Worked Up, found that 81% of people would rather speak to an AI app than HR, and 53% would choose an app over a therapist.
The worry comes from generative AI, where algorithms draw on a variety of sources to create new content. Generative text responses can be unsafe because they are neither clinically vetted nor explainable. The technology is exciting, but organisations need to limit its use to applications within a clinically approved algorithmic framework, and to build appropriate guardrails around any generative capabilities so that the final output is safe and reliable.
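As a rough illustration of what confining generative capabilities within a clinically approved framework can look like, the sketch below (in Python, with entirely hypothetical names and example wording) maps a detected user intent to a bank of pre-approved responses rather than passing free-form generated text straight to the user. It is a simplification, not a description of any particular product.

```python
# Hypothetical sketch: the conversational layer only ever surfaces content
# that a clinical team has reviewed. Generation, if used at all, is confined
# to selecting or lightly paraphrasing within this approved library.

APPROVED_RESPONSES = {
    "sleep_difficulty": (
        "It sounds like sleep has been hard lately. "
        "Would you like to try a short wind-down exercise?"
    ),
    "low_mood": (
        "Thank you for sharing that. "
        "Would it help to explore what's been weighing on you?"
    ),
}

def respond(detected_intent: str) -> str:
    """Answer only intents that have a clinically approved response;
    anything else gets a generic safe reply or a human handover."""
    return APPROVED_RESPONSES.get(
        detected_intent,
        "I'm not able to help with that directly, but I can connect you to someone who can.",
    )
```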
Then there are privacy, security, and regulatory risks. A first step for any regulated organisation seeking to use generative AI such as ChatGPT is to replace all personally identifiable information (PII) with synthetic PII, so the model can work with the full context without ever receiving or storing real identifying data. When the response comes back, a layer at the organisation's end swaps the synthetic PII in the output for the original PII, so that the result is meaningful to a patient or clinician. Data minimisation is another guardrail that strengthens privacy and security: send the model only the data that is absolutely required to generate the content needed.
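A minimal sketch of such a substitution layer is shown below. It assumes a toy regex-based detector for email addresses, an in-memory mapping, and a placeholder `call_generative_model` function standing in for whichever generative API the organisation uses; a production system would rely on a vetted PII-detection service and far broader coverage.

```python
import re
import uuid

# Toy detector: real deployments would detect names, addresses, identifiers,
# and clinical record numbers, not just email addresses.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with synthetic placeholders; return the mapping."""
    mapping: dict[str, str] = {}

    def substitute(match: re.Match) -> str:
        synthetic = f"person_{uuid.uuid4().hex[:8]}@example.com"
        mapping[synthetic] = match.group(0)
        return synthetic

    return EMAIL_RE.sub(substitute, text), mapping

def restore_pii(text: str, mapping: dict[str, str]) -> str:
    """Swap synthetic placeholders in the model output back to the originals."""
    for synthetic, original in mapping.items():
        text = text.replace(synthetic, original)
    return text

def guarded_completion(prompt: str, call_generative_model) -> str:
    """Only the redacted prompt leaves the organisation's boundary;
    the response is re-personalised locally before anyone sees it."""
    redacted, mapping = redact_pii(prompt)
    response = call_generative_model(redacted)  # hypothetical API wrapper
    return restore_pii(response, mapping)
```

Sending only the minimal, redacted context to the external model keeps the re-identification step, and therefore the sensitive data, entirely inside the organisation's own systems.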
Another category of risk stems from the accuracy and reliability of artificial intelligence. We need to ensure that every response is appropriate and never puts a patient or user at risk. Even the best AI has shortcomings, from training bias to overfitting. Managing this risk requires rigour, discipline, and humility. Responsible providers must test content for the scenarios where the AI fails, whether by saying something inappropriate or by failing to detect risk, assess those failures manually, and design failsafe responses for them.
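One way to picture the failsafe step, purely as an illustration: wrap the generated output in a risk check and fall back to a pre-approved response whenever the check fires. The `risk_score` function and blocklist below are toy placeholders for a clinically validated classifier built from exactly the kind of manually assessed failure scenarios described above.

```python
# Pre-approved, clinically reviewed fallback used whenever a generated
# response fails the risk check.
FAILSAFE_RESPONSE = (
    "I want to make sure you get the right support. "
    "Here are some resources reviewed by our clinical team."
)

# Toy blocklist; a real system would use a validated risk classifier.
UNSAFE_PATTERNS = ("diagnose", "stop taking your medication")

def risk_score(text: str) -> float:
    """Placeholder risk model: returns 1.0 if any unsafe pattern appears."""
    return 1.0 if any(p in text.lower() for p in UNSAFE_PATTERNS) else 0.0

def safe_response(generated: str, threshold: float = 0.5) -> str:
    """Pass the generated text through only if it clears the risk check;
    otherwise return the clinically approved failsafe response."""
    return FAILSAFE_RESPONSE if risk_score(generated) >= threshold else generated
```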
Each of these guardrails and restraints will be needed as organisations adopt generative AI models like ChatGPT for patient interactions. Rather than 'dumbing down' the capabilities of AI, they provide the clinical safety and efficacy that allow the technology to reduce access barriers and have a transformative impact on some of our biggest healthcare challenges.
Research shows that people are more likely to open up to AI than to humans, and that AI-guided mental health support and health coaching can create a bond and deliver efficacy comparable to that of human therapists. We believe that AI-based mental health support as the first step of care is perhaps the only scalable, systemic solution to the global mental health crisis. With over half the world living in areas with fewer than one psychiatrist per 250,000 people, and with long waiting lists and resource constraints even in developed economies, solutions like Wysa have used conversational AI to deliver therapeutic support that bridges key gaps in healthcare provision. At the same time, the ability to engage with a digital solution on a virtual or mobile platform addresses an essential part of inclusivity and equity in access. For instance, individuals in rural communities, or shift workers, may not have access to mental health facilities at times or locations that suit their needs, and an AI chatbot can be a potential solution.
Looking beyond the hype of ChatGPT, conversational AI technologies can actually help us solve some of the world's most pressing health crises. AI as the first step in the care continuum can help bridge the shortage of qualified professionals, and the barriers to access related to stigma, cost, and availability, creating equitable access to support at scale.