
AI in healthcare – regulation and responsibility


By Dr. Pandurang Kamat, Chief Technology Officer, Persistent Systems

The global healthcare services market grew to nearly $7 trillion in 2022 and, with global healthcare spending predicted to reach over $10 trillion this year, it’s safe to say that the healthcare technology landscape is one of goliath proportions.

And like any industry, healthcare—and the technology within it—is subject to changing regulation.

Take artificial intelligence (AI) for example. AI in healthcare can be used for a variety of applications, including claims processing, clinical documentation, revenue cycle management, medical records management and even diagnosis.

In layman’s terms, the use of AI not only speeds up manual processes, freeing up clinicians and healthcare workers to focus on patient care, but can also lead to more personalised assessments, safer treatments and, in the long run, cures for more diseases.

But while the benefits of AI are clear, so are the risks when it comes to data privacy and bias. Regulation is therefore key, particularly in an industry dealing with sensitive personal data.

AI regulation as a driver for good

In July, the UK government published its new AI policy paper, Establishing a pro-innovation approach to regulating AI.

The paper sets out proposals for a “new AI rulebook” which the government hopes will “unleash innovation and boost public trust in the technology”. And when it comes to healthcare, trust in the technology used within hospitals is essential, as patients will be more likely to disclose information that can lead to better care.

At the beginning of October, the White House Office of Science and Technology Policy also published its Blueprint for an AI Bill of Rights.

The document recognises that automated systems have “brought about extraordinary benefits”, highlighting how technology has, among other things, helped improve agricultural production, predict destructive storm paths and identify diseases in patients. But it also acknowledges that such benefits should not come at the cost of anyone’s rights.

For us working in the technology sector, understanding the implications of AI is something we’ve been doing for years. Yes, we’ve been focused on the technical standards, the software and the engineering know-how in developing AI technology. But we have done so knowing that AI has the potential to play a huge role in the future of our daily lives, across healthcare, jobs, finances and purchasing decisions, and we therefore advocate that it must be used responsibly.

Concerns about AI in healthcare

AI is often referred to as the “electricity” of the new data economy. In a world awash with data, the organisations best placed to succeed aren’t necessarily those with the most data, but those with the best data – and those who know what to do with it.

From patient history to labs, scan results, patient intake and discharge forms, every facet of modern healthcare creates data. And unlike other organisations where data can have a relatively short shelf life, medical data needs to be kept safe for the duration of a patient’s life and even after their death to aid diagnosis and/or treatment of relatives or others suffering from a similar condition.

Needless to say, healthcare isn’t just awash with data, it is drowning in it. Consequently, the use of AI to help alleviate some of the pains associated with data collection, management and analysis is a necessity in modern healthcare.

But one of the major concerns around AI in healthcare is how to appropriately address security and privacy.

While the loss of data is a nightmare for any company, the consequences for patients are far-reaching – from discrimination when seeking healthcare or health insurance, to exploitation in vulnerable medical situations, and even having their medical care subverted by those leveraging their sensitive medical and personal data.

Another core concern when looking at AI in a healthcare environment is bias, because no system created by humans can be fully immune from bias or discrimination in areas such as gender, race, religion, colour and age.

What’s clear is that as AI becomes more prevalent in healthcare environments, any solutions have to be created ethically, free from unjust biases, and made to be as impenetrable as possible. In other words, you have to create responsible AI.

Responsible AI is the right way to do AI

For us at Persistent, our ethics-based approach to responsible AI pivots around five supporting pillars that form the foundation of our work.

Reproducibility
In effect, this ensures that data collection, models and algorithms are standardised for consistency. For example, can the work that is done developing an AI-based system be replicated in real-world scenarios and deliver the same results?
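
As an illustrative sketch (the function and data here are hypothetical, not Persistent’s actual tooling), even something as simple as seeding the random number generator helps an AI experiment deliver the same results when it is re-run elsewhere:

```python
# Reproducibility sketch: a seeded generator means a sampling step in an
# AI pipeline can be repeated on another machine with identical results.
import random

def sample_cohort(patient_ids, n, seed=42):
    rng = random.Random(seed)  # local, seeded generator, not global state
    return rng.sample(patient_ids, n)

ids = list(range(100))
# Same seed, same cohort - the run is repeatable by construction.
assert sample_cohort(ids, 5) == sample_cohort(ids, 5)
```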

Explainability
If people are going to ‘buy in’ to AI, they need to understand what the technology is ­doing and how it is arriving at decisions. Explainability and interpretability of AI outcomes are key to building trust in the systems and should be built in from day one.
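
To make the idea concrete, here is a minimal, hypothetical sketch of an explainable scoring function. The features and weights are invented for illustration; the point is that the system reports why it produced a score, not just the score itself:

```python
# Explainability sketch (hypothetical model): each feature's contribution
# to the risk score is surfaced alongside the score, so a clinician can
# see what drove the output.
WEIGHTS = {"age": 0.03, "bmi": 0.05, "smoker": 0.8}  # invented weights

def risk_score(patient):
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = risk_score({"age": 60, "bmi": 30, "smoker": 1})
# Report contributions in descending order of influence.
for feature, value in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {value:+.2f}")
```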

Accountability
As AI becomes increasingly embedded in our systems, being accountable for the technology — and what happens as a result of the technology — is paramount. In other words, in the event of a decision being challenged or something going wrong, it’s no good simply blaming the technology. Someone—either personally or an organisation—has to be held to account. A human has to be kept in the AI loop.
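
Keeping a human in the loop can be as simple as an escalation rule plus an audit trail. The sketch below is a toy illustration: the confidence threshold, reviewer name and record fields are all assumed, not drawn from any real system:

```python
# Human-in-the-loop sketch: low-confidence automated decisions are
# escalated to a named reviewer, and every outcome is logged so
# accountability traces to a person or organisation, not "the algorithm".
from datetime import datetime, timezone

AUDIT_LOG = []

def decide(claim_id, model_confidence, model_decision, reviewer="dr_smith"):
    if model_confidence >= 0.95:
        outcome, decided_by = model_decision, "model"
    else:
        outcome, decided_by = "needs_human_review", reviewer
    AUDIT_LOG.append({
        "claim": claim_id,
        "outcome": outcome,
        "decided_by": decided_by,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return outcome

decide("C-101", 0.99, "approve")  # confident: decided by the model
decide("C-102", 0.60, "deny")     # uncertain: escalated to a human
```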

Security
With so much sensitive information tied up in AI systems, setting access controls, and ensuring that data is encrypted while maintaining the highest levels of compliance is a must.
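
One way to picture such access controls is field-level, role-based redaction. The roles, policy and record below are invented for illustration, and a real deployment would layer this on top of encryption at rest and in transit:

```python
# Role-based access sketch (hypothetical policy): a caller sees only the
# fields their role is cleared for; everything else comes back redacted.
POLICY = {
    "clinician": {"name", "history", "scan_results"},
    "billing":   {"name", "insurance_id"},
}

def read_record(record, role):
    allowed = POLICY.get(role, set())  # unknown roles get nothing
    return {k: (v if k in allowed else "<redacted>") for k, v in record.items()}

record = {"name": "A. Patient", "history": "asthma", "insurance_id": "INS-42"}
print(read_record(record, "billing"))
```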

Privacy
And with strict AI security measures in place, the same approach must also be taken to ensure that people’s personal information remains private with data protection methods that meet the latest laws and regulations.
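
As a small, hedged example of such a method, pseudonymisation replaces direct identifiers with keyed hashes before data ever reaches an AI pipeline. The key handling below is purely illustrative; in practice the secret would live in a managed vault:

```python
# Pseudonymisation sketch: identifiers become stable keyed hashes, so
# records stay linkable for analysis without exposing who the patient is.
import hashlib
import hmac

SECRET_KEY = b"replace-with-managed-secret"  # assumed: stored in a vault

def pseudonymise(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

token = pseudonymise("NHS-123-456")
# Stable: the same patient always maps to the same token.
assert token == pseudonymise("NHS-123-456")
```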

Together, these pillars provide an ethical framework for responsible AI for any organisation developing and implementing software. In fact, I would go further than that: when you are deep into data processing, it’s vital that you factor in and address ethics and any potential bias from the outset.

As software developers, responsibility has to be put ahead of any business goals. That may seem extreme, but it simply underlines the importance of responsible AI. And with more and more healthcare institutions looking to adopt AI processes and governments looking to legislate in this area, it’s something we all have to take seriously.
