
Responsible AI for Responsible Health Content

By Dr. Swadeep Srivastava


Increasingly, Artificial Intelligence (AI) and Generative Artificial Intelligence (GenAI) are becoming sharper and more sophisticated in supporting doctors and other medical professionals. The potential for AI in healthcare is huge: it can apply problem-solving techniques that humans could not manage alone, or that would otherwise consume enormous amounts of time.

Generative AI, on the other hand, refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.

Though AI and GenAI have gone through many loops of hype and promise in healthcare, they have delivered when it comes to reading X-ray images, speedier diagnosis and patient management. But when it comes to generating responsible healthcare content, the field is new and evolving.

People are increasingly surfing the internet for health information. In a recent article, Susannah Fox, former CTO for the U.S. Department of Health and Human Services, points out that 87 percent of 14- to 22-year-olds report that they research health questions online.

According to the most recent Pew Research Center Internet & American Life Project survey, 35% of U.S. adults say that at one time or another they have gone online specifically to try to figure out what medical condition they or someone else might have. Among online health seekers, 63% were looking for information about specific diseases or medical problems, and 47% were looking for information about a specific treatment or medical procedure. Additionally, 44% searched for diet information, and 36% were looking for information about exercise and fitness.

According to Google, women are more likely than men to go online to figure out a possible diagnosis, reflecting their traditional role as family caregivers. Men are more likely to search for information relating to sexual health, drugs, alcohol and smoking. Many people conducting these searches are doing so on behalf of someone else, such as a spouse or child, rather than for themselves.

At the same time, a recent survey found that 40% of scientists are unfamiliar with the use of AI in healthcare. However, 86% of healthcare providers, life science companies, and tech vendors use AI.

As people come to depend on the internet for health information, creating reliable content assumes importance. It is here that AI can be used as a tool to generate content.

In a paper titled “ChatGPT and Artificial Intelligence in Medical Writing: Concerns and Ethical Considerations” by Alexander S. Doyal, David Sender, Monika Nanda, and Ricardo A. Serrano, the authors argue that AI language-generation models, such as ChatGPT, have the potential to revolutionize the field of medical writing and other natural language processing (NLP) tasks. It is crucial, however, to consider the ethical concerns that come with their use. These include bias, misinformation, privacy, lack of transparency, job displacement, stifling of creativity, plagiarism, authorship, and dependence. It is therefore essential to develop strategies to understand and address these concerns.

Important safeguards include detecting common bias and misinformation, ensuring privacy, providing transparency, and being mindful of the impact on employment. AI-generated text must be critically reviewed by medical experts to validate the output of these models before it is used in any clinical or medical context. By considering these ethical concerns and taking appropriate measures, we can ensure that the benefits of these powerful tools are maximized while minimizing any potential harm.

AI can help healthcare writers with tasks such as taking minutes from online meetings, creating action lists, and generating content, as in the sketch below. However, some say that AI cannot replace human writers, because AI-generated content can lack the quality and value of human-written content, and can also contain bias, misinformation, and privacy issues.
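To make the meeting-minutes use case concrete, here is a minimal sketch of drafting minutes and an action list with a language model. It assumes the OpenAI Python SDK (v1.x) and an API key in the OPENAI_API_KEY environment variable; the model name, prompt and transcript file are illustrative placeholders, and the output is only a draft that a human editor must review.

```python
# Minimal sketch: drafting meeting minutes with a language model.
# Assumes the OpenAI Python SDK (v1.x); the model name, prompt and
# transcript file are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize this healthcare team meeting into concise "
                "minutes plus a bulleted action list. Do not add any "
                "fact that is not in the transcript."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a draft only -- a human must review it before circulation
```

The guardrail in the system prompt (“do not add any fact that is not in the transcript”) is a small hedge against fabrication, but it is no substitute for the expert human review that Doyal et al. insist on.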

When it comes to using AI regularly for creating content, we need to distinguish between static and dynamic healthcare content. AI or GenAI has no major role to play in static content. For example, descriptions of diseases, symptoms, treatments and so on need no help from AI. The patient information put out online by the CDC, the National Institutes of Health, WebMD, Mayo Clinic and others is the gold standard. That content has been carefully and meticulously vetted by senior medical professionals whose credentials are unquestionable. AI or GenAI cannot better it; trying would be pointless and needless. There is no use reinventing the wheel.

But when it comes to dynamic content, AI and GenAI have a role to play. Dynamic content is generated when there is breaking news in the field of healthcare: new research, a new paper or finding, or a new surgical procedure.

In such cases, AI and GenAI can help by digging out past or related information on the topic and collating it. This could help health writers fashion their own content before it is put out for publication or to the public; a simple sketch of this retrieval step follows.
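As one illustration of that “dig out and collate” step, the sketch below ranks a hypothetical archive of past articles by relevance to a breaking story, using TF-IDF cosine similarity from scikit-learn. The in-memory archive and titles are invented for illustration; a real workflow would query a vetted database or a proper search index, and the writer would still judge relevance and accuracy themselves.

```python
# Minimal sketch: surfacing past articles related to a breaking story
# so a writer can collate background material. The archive is an
# invented in-memory list; a real newsroom would query a vetted source.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = [
    "Trial results for mRNA vaccine efficacy in adults",
    "Overview of robotic-assisted cardiac surgery outcomes",
    "Meta-analysis of telemedicine adoption during the pandemic",
]
breaking_news = "New paper reports improved outcomes in robotic heart surgery"

# Vectorize the archive and the news item together so they share a vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(archive + [breaking_news])

# Compare the news item (last row) against every archived article.
scores = cosine_similarity(doc_vectors[-1], doc_vectors[:-1]).ravel()

# Rank the archive by relevance for the writer to review.
for score, title in sorted(zip(scores, archive), reverse=True):
    print(f"{score:.2f}  {title}")
```

This only finds and orders candidate material; as the next paragraph cautions, everything retrieved still has to be checked, because collation faithfully reproduces whatever errors the sources contain.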

Says Premangshu Ray, senior journalist and author: “Extreme caution should be exercised in this regard. One should remember that AI and GenAI only collate, rephrase and rewrite information already available. So, if the information is wrong, skewed or dubious, it may reflect in the outcome.”

It is worth noting that there is an overload of healthcare ‘information’ on social media. There was a huge burst of infodemic content during the Covid-19 pandemic, and much of that content is still questionable, including some put out by experts. If AI were to trawl all of this to generate new content, the result could be disastrous.

There is also the potential for digital ‘naughty boys’ trying to have some fun by posting information that looks genuine but may ultimately be a prank – a sort of deepfake. Recently, an interview with a senior doctor turned out to be fake.

There is also a potential threat of biased writers posting articles to run down an individual, hospital or research institution.

In the paper published in Cureus, Alexander S. Doyal et al. say that while AI-generated text can offer numerous benefits and enhance various aspects of medical writing, we must approach its use with great caution and mindfulness. The advantages of efficiency, productivity, and support in generating content must be weighed against potential downsides such as bias, misinformation, plagiarism, and privacy concerns. As AI technologies continue to advance rapidly, it is essential for the medical community, policymakers, and society as a whole to continually grapple with the ethical implications and challenges posed by AI-generated text.

The responsible use of AI in medical writing necessitates clear guidelines, robust validation processes, and close collaboration between AI systems and human expertise. Transparency and acknowledgment of the role of AI in generating text are vital to ensuring that human authors remain accountable for the final output. Additionally, ongoing research and development are required to address bias detection, misinformation prevention, and privacy protection in AI-generated text.

This does not mean that AI and GenAI can be written off. These tools have great potential if used with skill and caution.

There is a way out. Health writers who use AI to generate an article should visit and revisit the write-up and ensure that it is fair, accurate, responsible and understandable. There should be no ambiguity or contradictions.

The touchstones of healthcare content writing are accuracy, evidence-based facts and lucidity.

Ultimately, the question of whether we should use AI-generated text in medical writing will persist. The answer lies in our ability to strike a delicate balance between leveraging AI’s potential while respecting the importance of human creativity, critical thinking, and ethical considerations. As we navigate this evolving landscape, it is crucial to maintain a thoughtful approach and prioritize the well-being of patients, the integrity of medical knowledge, and the overall advancement of healthcare practices. By doing so, we can harness the power of AI while upholding the highest standards of medical writing and patient care. (Ack: Alexander S. Doyal et al.)


(The author is Dr. Swadeep Srivastava, Founder & Chief Belief Officer of HealthPresso, and the views expressed in this article are his own)