Getting the Right Data for Clinical Evaluation Reports: An AI-Powered Approach
Author(s): Gaugarin Oliver
If you're a medical device maker with any presence (sales, operational, or otherwise) in the European Union (EU), you likely already know that the EU MDR, a regulatory regime with more stringent requirements for clinical evidence than its predecessor, the Medical Device Directive (MDD), came into effect earlier this year.
The bad news? Staying compliant with MDR has meant a great deal more time, expertise, and expense for most companies than anything they've dealt with previously.
This resource strain is due primarily to the time-consuming task of finding the correct data to satisfy Europe's MEDDEV guidance documents, which govern the creation of clinical evaluation reports (CERs). To stay compliant and receive a CE Mark for distribution inside the EU, medical device makers must submit a CER (an unbiased clinical assessment of the device based on published literature and unpublished internal data) and keep it up to date. All CERs must demonstrate conformity with the Essential Requirements for safety and performance in MEDDEV 2.7/1 Rev. 4 Annex 1.
It all adds up to an incredible amount of work, especially considering MDR's new requirements around device classification, technical documentation, postmarket surveillance, and other areas.
AI and natural language processing (NLP) technologies are powerful tools that can help medical device manufacturers create and keep comprehensive CERs up to date. But how, exactly?
Let's find out.
What makes a CER?
At their core, CERs are extremely detailed benefit/risk assessments. They ensure the benefits of each medical device outweigh any potential downsides and can include data related to the specific device or data about equivalent devices.
CER production typically falls into five distinct stages:
- Stage 0: Scope definition and planning
- Stage 1: Identification of pertinent data
- Stage 2: Appraisal of pertinent data
- Stage 3: Analysis of clinical data
- Stage 4: Finalization of the clinical evaluation report
CERs get submitted to notified bodies (NBs) designated under MDR, which weigh questions such as: Will the device perform as intended? Will it be safe? Is it superior to alternative methods of care? After evaluating these and other questions, the NB determines whether the device can be sold in Europe or whether it requires additional clinical data before approval.
Identifying pertinent data in CERs
To complete stages 1 and 2 above, CERs require a great deal of favorable and unfavorable data about the device, sourced both internally and externally (via databases such as PubMed, EMBASE, Cochrane, and Google Scholar).
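As a rough illustration of what automating the external half of that search can look like, here is a minimal sketch that queries PubMed through NCBI's public E-utilities API. The query string and device category are hypothetical examples, not part of any MEDDEV-mandated protocol.

```python
# A minimal sketch of an automated PubMed search via NCBI's public
# E-utilities API. The search term below is a hypothetical example.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(query: str, max_results: int = 100) -> list[str]:
    """Return PubMed IDs (PMIDs) matching a Boolean query."""
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": query,
                "retmax": max_results, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# Hypothetical device category: gather both favorable and unfavorable data.
pmids = search_pubmed(
    '"hip prosthesis"[MeSH] AND ("adverse effects" OR "device failure")'
)
print(f"{len(pmids)} candidate records found")
```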
Companies must submit a complete evaluation of all technical features and testing results of the device as they pertain to both safety and performance, including:
- Preclinical testing data used for verification and validation (including bench and/or animal testing)
- Any device failures during testing or otherwise
- Manufacturer-generated clinical data such as:
a) Premarket clinical investigations
b) Postmarket clinical data
c) Complaint reports
d) Explanted device evaluations
e) Field safety corrective actions
- Instructions for use
Other crucial data points concern the usability of the medical device, determined through human factors studies, and a review of the state of the art (SOTA) in the relevant clinical field.
Data collection for CERs: Common challenges and mistakes
The process of identifying pertinent data can, however, be misleading at best (and disastrous at worst) if performed incorrectly. Indeed, the results of manual literature searches can fluctuate wildly depending on the competence of the search professionals involved and several common errors and challenges, as detailed in a 2017 study by Indani et al.
Typical data inclusion errors
Indani et al. detail several common mistakes that can delay or derail any scientific literature search (the query sketch after this list illustrates the first two), including:
- Errors of inclusion (i.e., too much data). Vague or general search terms, unspecialized medical databases, and faulty Boolean logic can bury reviewers in an overwhelming mix of semi-relevant, irrelevant, or duplicate data.
- Errors of exclusion (i.e., too little data). When search terms are too specific, or when excluding Boolean logic (such as the AND operator) is overused, the opposite problem arises: a dataset too thin to capture all relevant information.
- Errors of inclusive exclusions. These stem from keyword and data selection bias, where those conducting the literature search favor terms that support a desired outcome.
- Errors of exclusive inclusions. These limit results through overly exclusive search terms, such as failing to account for regional spellings and dialects or omitting common synonyms.
- Errors of exclusive exclusions (limited relevance). These occur when a search is conducted with bias and too much specificity, producing a one-sided dataset that lacks sufficient information.
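To make the first two error types concrete, the sketch below contrasts hypothetical over-broad and over-narrow PubMed-style Boolean queries with a more balanced alternative. The device and condition terms are invented for illustration only.

```python
# Hypothetical PubMed-style queries illustrating errors of inclusion
# and exclusion. Terms are invented for illustration only.

# Error of inclusion: vague terms and OR-heavy logic return an
# unmanageable flood of semi-relevant and duplicate records.
too_broad = "(stent OR implant OR device) AND (problem OR issue)"

# Error of exclusion: over-specific fields and stacked AND operators
# silently drop relevant unfavorable data.
too_narrow = (
    '"drug-eluting coronary stent"[Title] '
    'AND "late stent thrombosis"[Title]'
)

# A more balanced query pairs controlled vocabulary (MeSH) with
# synonyms and spelling variants in the title/abstract fields.
balanced = (
    '("drug-eluting stents"[MeSH] OR drug eluting stent*[Title/Abstract]) '
    'AND (thrombosis[MeSH] OR thrombos*[Title/Abstract])'
)
```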
These challenges and their resulting errors can range in severity, from CER production delays to outright rejection by NBs designated under MDR.
NLP is your secret weapon for CER literature search
For these and other reasons, researchers say automation via AI and NLP is key to improving the efficiency, cost-effectiveness, speed, and accuracy of CER research. "Artificial intelligence and natural language processing based tools with cognitive capabilities provide a near-perfect solution for literature search," write Indani et al.
The researchers also say NLP-based automation reduces bias, shrinks the need for multiple searches through algorithm reuse and customization, allows for auto-translation of sources in other languages, and can speed up the process of literature selection and extraction by orders of magnitude, "significantly decreas(ing) the time taken for manual search and filtering appropriate content."
"Automation will eventually reduce the cost, time, and resources (required) in the whole process," they say. "The combination of a good literature search tool and a trained literature search professional can be the best solution to avoid errors and limitation of literature search."
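As a hedged illustration of the kind of relevance ranking such tools can perform (not any specific vendor's implementation), the following sketch scores hypothetical abstracts against a device's intended-purpose statement using TF-IDF and cosine similarity:

```python
# A minimal sketch of NLP-assisted literature triage: rank candidate
# abstracts by similarity to the device's intended-purpose statement.
# All texts below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

intended_purpose = (
    "Implantable cardiac pacemaker for the treatment of symptomatic bradycardia"
)
abstracts = [
    "Long-term safety outcomes of implantable pacemakers in bradycardia patients",
    "A novel surgical stapler design evaluated in porcine models",
    "Post-market surveillance of pacing leads: failure modes and complaints",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([intended_purpose] + abstracts)

# Score each abstract against the intended-purpose statement and sort
# so reviewers appraise the most relevant records first.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text[:60]}")
```

Production systems typically replace TF-IDF with learned embeddings and retrain on reviewer feedback, but the ranking principle is the same.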
CapeStart can improve your CER literature search
CapeStart's NLP automation, subject matter experts, and smart workflows provide a repeatable, transparent process that makes it easier and cheaper to produce and submit CERs, keep them up to date, and standardize them across all your products and business units. Our NLP models learn from reviewer inputs to continually assess and re-order scientific literature based on relevance, significantly improving the accuracy, efficiency, and speed of literature reviews over manual approaches.
That means medical device makers can produce audit-ready, compliant CERs (sometimes in less than half the time of a manual approach) powered by ready-to-go form templates, automated quality assurance, duplicate identification, and detection of common errors such as inclusive exclusions or exclusive inclusions.
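For a sense of how duplicate identification across database exports can work in principle, here is a minimal sketch using a common heuristic: match on DOI when available, otherwise on a normalized title. The records and field names are hypothetical, and this is not CapeStart's actual implementation.

```python
# A minimal sketch of duplicate identification across literature
# database exports (e.g., the same study indexed in PubMed and EMBASE).
# Records and field names are hypothetical.
import re

def normalize_title(title: str) -> str:
    """Lowercase and strip punctuation/whitespace for approximate matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for rec in records:
        # Prefer the DOI as a stable key; fall back to the normalized title.
        key = rec.get("doi") or normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Safety of Device X: A Multicenter Study.", "doi": None},
    {"title": "Safety of device X - a multicenter study", "doi": None},
]
print(len(deduplicate(records)))  # -> 1: both titles normalize to the same key
```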