Last Updated on June 19, 2021 by Editorial Team
Author(s): Gaugarin Oliver
5 Ways AI Scales the Effectiveness of Literature Reviews, From Clinical Evaluations to Precision Medicine
Systematic literature reviews (SLRs) are a vital component of modern health care, especially today, as the ever-growing amounts of scientific literature available become harder and harder to analyze using conventional methods.
In a previous blog post, we discussed the importance and challenges of performing SLRs on time. The challenges are massive: formulating research questions and inclusion rules, mining large research databases (including identifying and extracting full-text data), and summarizing and synthesizing all that information.
We’ve also discussed in other posts how AI techniques such as natural language processing (NLP) help identify and extract PICOTS (population, intervention, comparison, outcomes, time, and study design) elements. PICOTS is a model used by researchers to develop well-informed research questions.
In this post, we'll cover five use cases:
- Clinical evaluation reports (CERs) for medical devices
- Adverse event monitoring (pharmacovigilance)
- Target identification
- Drug repurposing
- Precision medicine
Below, we discuss each use case and some of the ways AI and ML can help improve efficiency, timeliness, and accuracy for researchers.
Clinical evaluation reports (CERs) for medical devices
Medical device manufacturers must perform clinical evaluations during the entire life cycle of a device, including evaluating its technical specifications, instructions for use, the potential for risk, and evidence on biological safety. The clinical evaluation report (CER) uses clinical data concerning the safety and performance of the medical device (and any similar devices) to prove its benefits outweigh the risks.
Because of the large amounts of data and considerable rigor involved, this risk-benefit analysis is similar to the systematic literature review (SLR) process. Another similarity is that both CERs and SLRs present several challenges to researchers.
The CER process involves several steps, including:
1. Defining the scope. The first step of every clinical evaluation report is to formulate a clinical evaluation plan (CEP), defined by MEDDEV 2.7.1 (Rev 4), to define scope, methodology, and criteria. Any changes in design, materials, or manufacturing processes should be included, along with any recently identified clinical concerns.
AI and ML techniques can help speed up the CEP development process, allowing manufacturers to continuously conduct and document the evaluation process under industry regulations such as MDR 2017/745.
2. Identifying relevant data. This includes evidence from pre- and post-market clinical investigations, risk management activities, preclinical studies, complaints regarding safety and performance, and post-market surveillance (PMS) reports.
AI and ML techniques can quickly identify data from various sources and assemble it in the required format. CapeStart can work closely with multiple groups within the manufacturing company to obtain these data to ensure evidence is available in time for inclusion in the CER, especially for CE-mark renewals with specific deadlines.
3. Literature search and report writing. As with SLRs, an exhaustive literature search is often the most time-consuming of all the CER steps, and it takes even longer if researchers aren't skilled in Boolean and other search operators.
NLP-based technology can quickly develop the most effective search protocol possible, compliant with MEDDEV 2.7.1 (Rev. 4) and drawing on a range of published sources, including PubMed, Embase, and Cochrane. NLP solutions can perform active learning, apply limits, and even rank results by relevance based on search criteria, saving days or even weeks of manual work.
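To make the relevance-ranking step concrete, here is a minimal sketch (not any vendor's actual implementation) that scores retrieved abstracts against a search query using TF-IDF cosine similarity. The query, abstracts, and resulting scores are all hypothetical:

```python
import math
from collections import Counter

def tokenize(text):
    # crude tokenizer: lowercase and strip common punctuation
    return [w.lower().strip(".,;:()") for w in text.split()]

def tfidf_vectors(docs):
    token_lists = [tokenize(d) for d in docs]
    df = Counter()                      # document frequency per term
    for tokens in token_lists:
        df.update(set(tokens))
    n = len(docs)
    vectors = []
    for tokens in token_lists:
        tf = Counter(tokens)
        vectors.append({t: (c / len(tokens)) * math.log((1 + n) / (1 + df[t]))
                        for t, c in tf.items()})
    return vectors

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_by_relevance(query, abstracts):
    # vectorize the query together with the abstracts so they share one IDF table
    vectors = tfidf_vectors([query] + abstracts)
    qvec, dvecs = vectors[0], vectors[1:]
    return sorted(zip(abstracts, (cosine(qvec, d) for d in dvecs)),
                  key=lambda pair: pair[1], reverse=True)
```

A production screening tool would layer active learning and richer language models on top of a scoring core like this, but the basic idea of ordering candidate abstracts by similarity to the search criteria is the same.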
But aside from the complexity and time-consuming nature of the steps above, several additional challenges often hinder the CER process. For one thing, finding experienced people with the right know-how can be difficult: locating qualified researchers who are up to date on clinical evaluation regulations, medical writing standards, literature screening, and data analysis, and who are also proficient in ML and NLP techniques, isn't easy.
Creating a compliant CER with sufficient clinical evidence is also extremely time-consuming for manufacturers. Done manually, CERs can take months to complete. An NLP-aided technology solution paired with subject matter experts and machine learning engineers can process large volumes of data quickly, providing accurate and responsive guidance that reduces the time, cost, and effort required by a manual approach.
Adverse event monitoring in pharmacovigilance
All pharmaceutical companies must monitor scientific literature for adverse events, both to comply with regulations and as part of their baked-in pharmacovigilance process. As with SLRs, this process is highly time-consuming, especially given the exponential growth in scientific literature.
But manual approaches are often error-prone and slow to complete, and a lack of thoroughness can sometimes even lead to trouble with regulators.
The traditional process consists of several steps, including:
- Articles from multiple journals are manually scanned, and a shortlist of relevant articles is created for further investigation
- A trained pharmacovigilance professional reviews the shortlist for adverse event information meeting the reporting criteria
- Articles with reportable adverse events are forwarded to case processing and reporting teams
This time-consuming process generally nets a low number of adverse events during each review cycle. But when powered by AI, pharma companies can use pre-trained models to integrate data from multiple sources and quickly identify relevant adverse events, saving research teams substantial time and effort on large analysis projects.
NLP-based solutions can scan mountains of unstructured data to find relationships between certain drugs and adverse events, allowing human experts to focus on more value-added tasks.
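To illustrate the shortlisting idea, here is a deliberately simple co-occurrence sketch that flags abstracts mentioning both a monitored drug and an adverse-event term. The drug and event vocabularies are invented for illustration; a real pharmacovigilance pipeline would rely on trained NLP models and curated terminologies such as MedDRA rather than exact token matches:

```python
# Hypothetical watchlists, standing in for curated vocabularies
DRUGS = {"drugx", "drugy"}
AE_TERMS = {"nausea", "hepatotoxicity", "rash", "dizziness"}

def triage(abstracts):
    """Shortlist abstracts that co-mention a monitored drug and an AE term."""
    shortlist = []
    for text in abstracts:
        tokens = {w.lower().strip(".,;:") for w in text.split()}
        drugs, events = tokens & DRUGS, tokens & AE_TERMS
        if drugs and events:
            shortlist.append({"text": text, "drugs": drugs, "events": events})
    return shortlist
```

Even this toy filter shows how automation changes the economics of the review cycle: human experts only read the shortlist, not the full journal feed.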
Target identification
Barring a completely serendipitous discovery, developing (and reaping the benefits from) a first-in-class drug means first identifying a new drug target before going down a very time-consuming and expensive road. Literature reviews play a crucial role in this regard, especially when it comes to “understanding target biology and links between the target and disease states.”
An advanced NLP-based solution, however, goes further than simple lexical recognition. It interprets unstructured text through an understanding of syntax, semantics, and other layers of analysis, saving valuable time through:
- Mining of abstracts and full-text literature
- Identification of target-disease associations in documents
- Ranking of target-disease associations based on a confidence score
- Further analysis of genes and proteins for enrichment in gene ontology (GO) terms, pathways, Medical Subject Headings (MeSH) terms, and protein-protein interactions (PPIs) to identify highly relevant targets
- Functional enrichment analyses by comparing identified targets among various search results
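The confidence-scoring idea above can be sketched, under very simplified assumptions, as sentence-level co-mention counting normalized into a 0-to-1 score. The gene and disease names below are placeholders, and real systems use much richer relation-extraction models than bare co-occurrence:

```python
from collections import Counter

def rank_associations(sentences, targets, diseases):
    """Score (target, disease) pairs by sentence-level co-mention counts,
    normalized to a 0-1 confidence score, highest first."""
    counts = Counter()
    for sentence in sentences:
        words = {w.lower().strip(".,;") for w in sentence.split()}
        for t in targets:
            for d in diseases:
                if t in words and d in words:
                    counts[(t, d)] += 1
    if not counts:
        return []
    top = max(counts.values())
    return sorted(((pair, n / top) for pair, n in counts.items()),
                  key=lambda item: item[1], reverse=True)
```

In a real pipeline, this ranking would then feed the enrichment analyses described above, so that only the highest-confidence targets are carried forward.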
Drug repurposing
Also known as drug repositioning, drug repurposing helps pharmaceutical companies identify new therapeutic uses for existing drugs. As one can imagine, repurposing a drug is far less expensive than discovering and developing a new one, and repurposing efforts can piggyback on previous clinical trials.
But traditional manual approaches are still time-consuming and typically involve four steps: compound identification, compound acquisition, development, and FDA post-market safety monitoring.
An NLP-based technology solution can extract potential new applications of existing drugs through extensive literature review, exploiting any existing drug-disease knowledge to scan literature resources systematically:
- Extracting relevant documents from a range of data sources
- Identifying study types or categories to prioritize drug-disease pairs
- Comparing a drug’s signature (such as its transcriptomic, structural, or adverse effect profile) with that of another drug or disease phenotype
In this way, NLP-based approaches can quickly and accurately identify disease-gene, gene-drug, and disease-drug relationships, improving the odds of drug repurposing success while cutting the time, effort, and cost typically associated with these projects.
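One simple way to compare drug signatures, sketched below under the assumption that each drug's adverse-effect profile has already been extracted from the literature, is Jaccard similarity between effect sets. The drug names and profiles here are invented for illustration:

```python
def jaccard(a, b):
    # overlap of two sets as a fraction of their union
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented adverse-effect profiles, standing in for signatures mined from text
PROFILES = {
    "drug_a": {"headache", "nausea", "insomnia"},
    "drug_b": {"headache", "nausea", "dizziness"},
    "drug_c": {"rash"},
}

def repurposing_candidates(query_drug, profiles, threshold=0.3):
    """Rank other drugs whose signatures resemble the query drug's."""
    scored = ((other, jaccard(profiles[query_drug], sig))
              for other, sig in profiles.items() if other != query_drug)
    return sorted((p for p in scored if p[1] >= threshold),
                  key=lambda p: p[1], reverse=True)
```

The same comparison logic applies whether the signature is an adverse-effect profile, a transcriptomic fingerprint, or a structural descriptor; only the feature sets change.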
Precision medicine
Also known as personalized medicine, this relatively new approach to medicine capitalizes on the variability in genes, environment, and lifestyle among individuals to tailor treatment plans accordingly. Exhaustive literature reviews are a significant element of precision medicine, but cutting through large amounts of data to find only the most relevant articles is a huge challenge.
An NLP-based approach can sidestep the immense time commitment required by traditional search methods by identifying causal genes and rapidly extracting actionable insights from multiple data sources. NLP models learn patterns from unstructured text, identify entities within the text (including any relationships or associations with other entities), and extract a wide variety of entities, including genes, gene variants, chemical/drug names, species, cohort types, or diseases for further analysis.
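A minimal, dictionary-based sketch of this entity-extraction step is shown below. Production systems use trained biomedical NER models rather than fixed gazetteers, and the lexicon entries here are just examples:

```python
# Toy gazetteers; real pipelines use trained biomedical NER models
LEXICON = {
    "gene":    {"brca1", "tp53", "egfr"},
    "drug":    {"tamoxifen", "erlotinib"},
    "disease": {"melanoma", "carcinoma"},
}

def extract_entities(text):
    """Return (token, entity_type, position) triples found in the text."""
    entities = []
    for i, raw in enumerate(text.split()):
        token = raw.lower().strip(".,;()")
        for etype, terms in LEXICON.items():
            if token in terms:
                entities.append((token, etype, i))
    return entities
```

Once entities are tagged with their positions, downstream steps can look for relationships between them, such as a gene variant and a drug co-occurring in the same sentence.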
How CapeStart can help
CapeStart’s ML and NLP engineers, data scientists, and subject matter experts can help research organizations scale themselves, improve efficiencies, and stay compliant when performing virtually any literature review on any type of medical or scientific literature.
Our proprietary, NLP-based SLR solution semi-automates the SLR process, from research question formulation to meta-analysis and evidence mapping. It's backed by a team of experienced medical writers, who use the insights provided by our AI and ML tools to produce boardroom-ready clinical evaluations and other reports.
Originally published in Towards AI on Medium.