Unraveling the potential of Generative AI (e.g., ChatGPT) in Pharmacovigilance

25 Sep 2023
Pharmacovigilance is a vital discipline that ensures patient safety by systematically monitoring and assessing the risks and benefits of a pharmaceutical product throughout its life cycle. Through real-world data analysis, it identifies safety concerns, guides regulatory decisions, and fosters healthcare improvement by maintaining a vigilant focus on drug safety. Generative AI is poised to transform pharmacovigilance by automating adverse event detection, improving signal detection, expediting the analysis of vast datasets, and amplifying the efficiency of drug safety monitoring.
Generative AI (Gen AI) has captured widespread attention for its transformative potential. As we delve into Gen AI's ability to reshape industries, I have engaged in insightful conversations with life-science innovators. Amidst their cautious enthusiasm, one question resonates: when will Gen AI be ready for operational use? At the current pace of development, I believe it will not be long: Gen AI will be ready for real-world production use cases in months, if not weeks. With the proper instructions and training, these models can carry out tasks accurately, predictably, and consistently.
The innovation is real, and it is here! Healthcare, Retail, Manufacturing, BFSI, Education: we have seen significant momentum across the board. It is a boardroom conversation, unlike other technologies that have emerged so far. Every single business is either experimenting, ideating, or in the construction phase.
– Sandeep Alur
Director – Microsoft Technology Center
To explore this potential, Indegene recently hosted a webinar, ‘Unraveling the potential of Generative AI (e.g., ChatGPT) in PV’, which examined 6 use cases, 4 pitfalls, and 2 considerations for real-world application. I had the privilege of hosting a panel discussion with Sandeep Alur, Director – Microsoft Technology Center; Tarun Mathur, CTO, Indegene; and Dr. Shubha Rao, Senior Director, Safety, Indegene. Our mission was twofold: first, to discuss and inform pharmacovigilance (PV) leaders on the applicability of Gen AI in PV; and second, to separate reality from the hype surrounding Gen AI and share learnings from our first-hand experience working with it in PV.
To gauge our audience’s familiarity with Gen AI, we conducted a poll that yielded intriguing results: nearly a third of the audience was either closely involved in adopting Gen AI at an organizational level or had experimented with it at work (Figure 1).

6 use cases of Generative AI in Safety and Pharmacovigilance

At Indegene, we have been running various pilot use cases across the life sciences value spectrum, specifically within the medical and safety areas. We used a combination of Gen AI and other models, designed and trained by Indegene PV domain experts, to produce reasonably accurate results in simulated contexts that mirror real-world use. These use cases range from enhancing AE intake processes to generating high-quality content for aggregate reports.
For each of the six use cases below, we list the current challenge, the Gen AI use case we piloted, and the business gain.

1. ‘First-time right and complete’ adverse event (AE) intake
Challenge: Cumbersome AE intake forms often lack essential details, leading to costly follow-ups.
Use case: Leveraging Gen AI, we transformed traditional AE forms into dynamic questionnaires, adapting queries to case specifics and reporter profiles. Responses guided subsequent questions, ensuring comprehensive data collection (see the intake sketch after this list).
Business gain: This responsive approach enhances information capture, elevates the reporter experience, and potentially reduces the need for follow-ups by 30-40%.

2. Identifying adverse events and other data fields from unstructured/free text in case reports
Challenge: Understanding the context is essential to identify an adverse event in free text. Previously, large volumes of data (10,000+ records) were required to train an AI/ML model to classify adverse events in free text, with limited success.
Use case: We used Gen AI models to accurately identify AEs in unstructured text, such as case reports (see the extraction sketch after this list). The use case extends to AE identification in other scenarios, such as literature and patient or HCP transcripts, and can be expanded to identifying other key information from unstructured text, such as medications, indications, and the timeline surrounding AE occurrence.
Business gain: Gen AI models identify AEs in unstructured text such as case reports with minimal human touch, streamlining the AE intake process and improving overall efficiency.

3. Generating case narratives
Challenge: Case narratives provide crucial descriptions of the circumstances of an adverse event, so regulators and marketing authorization holders (MAHs) emphasize authoring high-quality narratives that capture all the information accurately. Because the structure and content of a narrative vary by context, case type, and sponsor conventions, manual authoring is often error-prone.
Use case: Using Gen AI, we dynamically generated narratives specific to the case in question, replacing the multiple static templates that case processors or medical writers use to author and edit narratives. In addition, Gen AI can author inferences and conclusions from the case facts, based on historical intelligence and training.
Business gain: The role of humans shifts to that of a reviewer rather than the author.

4. Identifying missing information, generating follow-up letters, and auto-intake of follow-up information
Challenge: Obtaining complete AE information from reporters is crucial to minimize follow-up interactions, but achieving this completeness in intricate clinical trials and complex therapy cases can be challenging. Identifying missing data and creating personalized follow-up queries is laborious, and seamlessly integrating reporter responses into the case information for more informed and efficient decision-making remains a continuing challenge.
Use case: We used Gen AI as a business rules engine to identify critical missing information based on the case type. Gen AI then generated a follow-up letter containing only the relevant questions, with probable response options for the reporter to fill out (see the follow-up sketch after this list). Once the reporter responded, our Gen AI models extracted key information, such as relevant history and outcome, from the responses and updated the initial case.
Business gain: It eliminates multiple manual steps and automates a costly follow-up process.

5. Assisted authoring of aggregate periodic safety reports
Challenge: A single aggregate report (AR) takes a few hundred hours to author, review, and publish.
Use case: We leveraged Large Language Models (LLMs) in tandem with other AI/ML models and rule-based systems to convert tables into textual content and vice versa, enabling the transformation of tabular safety data into meticulously authored reports (see the table-to-text sketch after this list). Our approach involved identifying pertinent information related to safety topics, analyzing patterns across safety cases, drawing meaningful inferences, and articulating the findings in accordance with approved guidelines and appropriate messaging. While LLMs may not be the sole choice for data analysis, they play a pivotal role in conjunction with other models, for example in formulating precise queries for data analysis. As we continue to explore diverse models for table-to-text conversion, Gen AI emerges as a critical component in creating an automated, high-quality initial draft of these reports.
Business gain: Potential to reduce authoring effort by >40% with Gen AI-led assisted authoring.

6. Evidence generation
Challenge: Information required for signal detection is scattered across multiple resources, making manual efforts more reactive than proactive.
Use case: Using Gen AI and other technologies, we gathered insights from various resources for better signal detection and evaluation. The insights generated were then presented to users in a consumable format so that they could probe interesting evidence further.
Business gain: Shifts signal management from a reactive to a proactive approach by presenting consolidated information for better signal validation.
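To make use case 1 more concrete, here is a minimal, hypothetical sketch of a dynamic intake questionnaire in Python. It assumes the OpenAI Python SDK (openai>=1.0) purely as a stand-in for whatever Gen AI service is actually used; the model name, prompts, and function names are illustrative, not Indegene's implementation.

```python
# Hypothetical sketch of use case 1: a dynamic AE intake questionnaire.
# The model proposes the next question based on the reporter profile and
# the answers collected so far. Model, prompts, and names are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "You are an adverse event (AE) intake assistant. Given the reporter "
    "profile and the answers collected so far, return the single most "
    "important follow-up question still needed for a complete case report, "
    "or the word DONE if the information already collected is sufficient."
)

def next_question(reporter_profile: str, answers: dict) -> str:
    """Ask the model which intake question to present next."""
    history = "\n".join(f"- {q}: {a}" for q, a in answers.items()) or "(none yet)"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user",
             "content": f"Reporter: {reporter_profile}\nAnswers so far:\n{history}"},
        ],
    )
    return resp.choices[0].message.content.strip()

answers = {"What product was taken?": "Drug X 10 mg daily"}
print(next_question("Consumer, post-marketing report", answers))
```

In a real intake form, this loop would run until the model returns DONE, with each new answer appended before the next call.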
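Use case 2 is essentially structured extraction. The sketch below, again a hypothetical example on top of the OpenAI SDK, asks the model to return adverse events, suspect drugs, and related fields from a free-text narrative as JSON; the field list and prompt are assumptions for illustration, and JSON mode (response_format) is assumed to be available on the chosen model.

```python
# Hypothetical sketch of use case 2: extracting adverse events and related
# fields from free-text case narratives as structured JSON.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Extract the following from the case narrative below and return JSON only:
- adverse_events: list of reported adverse events (verbatim terms)
- suspect_drugs: list of suspect medications
- indications: list of indications, if stated
- onset: free-text description of when the events started, or null

Narrative:
{narrative}
"""

def extract_case_fields(narrative: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        response_format={"type": "json_object"},  # request well-formed JSON
        messages=[{"role": "user", "content": PROMPT.format(narrative=narrative)}],
    )
    return json.loads(resp.choices[0].message.content)

example = ("Patient started Drug X for hypertension on 02 Jan and developed "
           "severe headache and rash two days later.")
print(extract_case_fields(example))
```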
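For use case 4, one way to picture the "business rules engine plus Gen AI" split is a deterministic check for missing fields followed by an LLM-drafted follow-up letter that asks only about the gaps. The required-field list, prompts, and helper names below are illustrative assumptions, not the production design.

```python
# Hypothetical sketch of use case 4: a simple rules check for missing fields
# followed by an LLM-drafted follow-up letter covering only the gaps.
from openai import OpenAI

client = OpenAI()

# Minimal "business rules": fields expected for an assessable case.
REQUIRED_FIELDS = ["patient_age", "patient_sex", "suspect_drug",
                   "adverse_event", "event_onset_date", "outcome"]

def missing_fields(case: dict) -> list[str]:
    """Return the required fields that are absent or empty in the case."""
    return [f for f in REQUIRED_FIELDS if not case.get(f)]

def draft_follow_up_letter(case: dict, gaps: list[str]) -> str:
    """Draft a follow-up letter that asks only about the missing items."""
    prompt = (
        "Draft a short, polite follow-up letter to the reporter of an adverse "
        "event case. Ask only about these missing items, one question each, "
        f"with likely answer options where sensible: {', '.join(gaps)}.\n"
        f"Known case facts: {case}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

case = {"suspect_drug": "Drug X", "adverse_event": "rash", "outcome": None}
gaps = missing_fields(case)
if gaps:
    print(draft_follow_up_letter(case, gaps))
```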
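And for use case 5, the table-to-text step can be pictured as prompting an LLM with a tabulated summary and asking for neutral draft prose. The toy summary rows, prompt, and model below are illustrative; in practice, as noted above, LLMs sit alongside other models, rules, and human review.

```python
# Hypothetical sketch of use case 5: turning a tabular summary of safety data
# into draft report text for human review.
from openai import OpenAI

client = OpenAI()

# Toy aggregate summary: event counts by reporting interval.
summary_rows = [
    {"soc": "Skin and subcutaneous tissue disorders", "pt": "Rash",
     "interval": 12, "cumulative": 87},
    {"soc": "Nervous system disorders", "pt": "Headache",
     "interval": 9, "cumulative": 54},
]

def table_to_text(rows: list[dict]) -> str:
    """Ask the model to narrate the tabulated counts without adding facts."""
    table = "\n".join(str(r) for r in rows)
    prompt = (
        "You are drafting a periodic safety report section. Summarize the "
        "tabulated adverse event counts below in neutral, regulator-ready "
        "prose. Do not add information that is not in the table.\n\n" + table
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(table_to_text(summary_rows))  # a draft paragraph for human review
```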
Watch the webinar recording embedded in this blog to see Dr. Shubha demonstrate two of the above use cases.
In the early stages, there was a lot of sensitivity, especially around data privacy and security. Now organizations are working with large tech partners to set up secure environments and infrastructure and run pilots. Healthcare organizations are seeing the same opportunity as other industries. Efficiency improvement and opportunities for augmentation versus replacement have been a big theme.
– Tarun Mathur
CTO, Indegene

4 pitfalls to be mindful of – Risks of using Gen AI in Pharma

We asked the audience about their comfort with adopting Gen AI and found that nearly half were worried about Gen AI adoption in pharmacovigilance, citing accuracy, trust, patient data security, and integration uncertainty (Figure 2).
Our panelists addressed 4 potential pitfalls associated with Gen AI (Image 1): ensuring quality and consistency of results, safeguarding data privacy and security, establishing trust mechanisms, and maintaining regulatory compliance. Sandeep elaborated on Microsoft's commitment to data privacy and security, ensuring that sensitive patient data remains within the network.
Image 1: Four potential pitfalls associated with Generative AI
While the experiments show promise, substantial effort is required to move them into production. Rigorous validation of the results, refinement of the instructions, and fine-tuning of the models will be required to ensure consistent and accurate results. Additional quality checks and process redesign will be needed when working with Gen AI-based outputs.
– Dr. Shubha Rao
Senior Director, Safety, Indegene

Guiding success: 2 considerations for implementation

Today, most PV tasks, be it case assessment, aggregate report authoring, or signal and risk management, are still carried out largely manually. As safety data keeps growing, drawing meaningful insights and separating noise from data is getting harder. Indegene’s initial experiments show that Gen AI can be instructed and trained to generate reliable, high-quality outputs in these scenarios. Two considerations are critical for successful implementation:
Model Selection and Collaboration: Partnering with domain experts and establishing toolkits and incubation pods are crucial.
Identifying Use Cases and Validation: Testing and validating Gen AI outputs through collaboration with tech and data science teams are essential.
Explore how Indegene Transforms Automated Case Intake and Processing for a Top 3 Pharma by leveraging the NEXT Adverse Events Management platform for end-to-end case processing, driven by Artificial Intelligence (AI) and Natural Language Processing (NLP) technology, for pharmacovigilance automation.
Charting the course forward
Utilizing Gen AI necessitates selecting healthcare-specific models, fine-tuning them, and integrating them seamlessly. Establishing trust mechanisms and managing change are equally vital. In conclusion, Gen AI's transformative potential in pharmacovigilance is promising; prudent progress, addressing challenges, and fostering trust are indispensable. Collaborating with Microsoft, Indegene is committed to harnessing Gen AI's potential in PV.

Author

Nikesh Shah
