

Getting the Right Data for Clinical Evaluation Reports: An AI-Powered Approach

Last Updated on October 8, 2021 by Editorial Team

Author(s): Gaugarin Oliver


If you’re a medical device maker with any presence (sales, operational, or otherwise) in the European Union (EU), you likely already know that the EU Medical Device Regulation (MDR), a regulatory regime with more stringent requirements for clinical evidence than its predecessor, the Medical Device Directive (MDD), came into effect earlier this year.

The bad news? Staying compliant with MDR has meant a great deal more time, expertise, and expense for most companies than anything they’ve dealt with previously.

This resource strain is due primarily to the time-consuming task of finding the correct data to satisfy the European MEDDEV guidance documents, which govern the creation of clinical evaluation reports (CERs). To stay compliant and receive a CE Mark for distribution inside the EU, medical device makers must submit a CER, an unbiased clinical assessment of the device based on published literature and unpublished internal data, and keep it up to date. All CERs must demonstrate conformity with the Essential Requirements for safety and performance in MEDDEV 2.7/1 Rev. 4, Annex 1.

It all adds up to an incredible amount of work — especially considering MDR’s new requirements around device classification, technical documentation, postmarket surveillance, and other areas.

AI and natural language processing (NLP) technologies are powerful tools that can help medical device manufacturers create comprehensive CERs and keep them up to date. But how, exactly?

Let’s find out.

What makes a CER?

At their core, CERs are extremely detailed benefit/risk assessments. They ensure the benefits of each medical device outweigh any potential downsides and can include data related to the specific device or data about equivalent devices.

CER production typically falls into five distinct stages:

  • Stage 0: Scope definition and planning
  • Stage 1: Identification of pertinent data
  • Stage 2: Appraisal of pertinent data
  • Stage 3: Analysis of clinical data
  • Stage 4: Finalization of Clinical Evaluation Report

CERs are submitted to notified bodies (NBs) designated under MDR, which weigh questions such as: Will the device perform as intended? Will it be safe? Is it superior to alternative methods of care? After evaluating these and other questions, the NB determines whether the device can be sold in Europe or whether it requires additional clinical data before approval.

Identifying pertinent data in CERs

To complete stages 1 and 2 above, CERs require a great deal of favorable and unfavorable data about the device, sourced both internally and externally (via databases such as PubMed, EMBASE, the Cochrane Library, and Google Scholar).
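
The external search step can be scripted against public APIs. As a rough, hypothetical sketch (not any particular vendor's pipeline), here is how a PubMed query might be assembled using NCBI's public E-utilities `esearch` endpoint; the device and outcome terms are invented for illustration:

```python
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(device_term, outcome_terms, max_results=100):
    """Build an esearch URL combining a device name with outcome keywords."""
    # Boolean structure: "device" AND (outcome1 OR outcome2 OR ...)
    term = f'"{device_term}" AND ({" OR ".join(outcome_terms)})'
    params = {
        "db": "pubmed",       # search the PubMed database
        "term": term,         # the Boolean query string
        "retmax": max_results,
        "retmode": "json",
    }
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Hypothetical device and outcomes, for illustration only.
url = build_pubmed_query(
    "drug-eluting stent", ["restenosis", "thrombosis", "adverse event"]
)
```

Fetching the resulting URL would return matching PubMed IDs, which a reviewer (or downstream NLP) would then appraise.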

Companies must submit a complete evaluation of all technical features and testing results of the device as pertaining to both safety and performance, including:

  • Preclinical testing data used for verification and validation (including bench and/or animal testing)
  • Any device failures during testing or otherwise
  • Manufacturer-generated clinical data such as:
    a) Premarket clinical investigations
    b) Postmarket clinical data
    c) Complaint reports
    d) Explanted device evaluations
    e) Field safety corrective actions
  • Instructions for use

Other crucial data points for inclusion surround the usability of the medical device, determined through human factor studies, and a review of the state of the art (SOTA) of the relevant clinical field.

Data collection for CERs: Common challenges and mistakes

The process of identifying pertinent data can, however, be misleading at best, and disastrous at worst, if performed incorrectly. Indeed, the results of manual literature searches can fluctuate wildly depending on the competence of the search professionals involved and on several common errors and challenges, as detailed in a 2017 study by Indani et al.

Typical data inclusion errors

Indani et al. detail several common mistakes that can delay or derail any scientific literature search, including:

  1. Errors of inclusion (i.e., too much data). The use of vague or general search terms, unspecialized medical databases, and faulty Boolean logic can lead to an overwhelming amount of specific and non-specific information, along with large amounts of semi-relevant, irrelevant, or duplicate data.
  2. Errors of exclusion (i.e., too little data). When search terms are too specific, or restrictive Boolean operators (such as AND) are overused, the opposite problem arises: a dearth of usable data that omits relevant information.
  3. Errors of inclusive exclusions. These stem from keyword and data selection bias on the part of those conducting the literature search, driven by a desired outcome.
  4. Errors of exclusive inclusions. These limit results through extremely exclusive search terms, such as failing to account for local dialects and spellings or excluding common synonyms.
  5. Errors of exclusive exclusions (limited relevance). These occur when a search is conducted with both bias and excessive specificity, producing a one-sided dataset that lacks sufficient information.
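
The first two failure modes are easy to reproduce. The toy sketch below (an invented corpus and invented keywords, not real CER data) shows how Boolean query construction drives errors of inclusion and exclusion:

```python
# Toy corpus: article IDs mapped to keyword sets (hypothetical data).
corpus = {
    1: {"stent", "restenosis", "human"},
    2: {"stent", "animal"},
    3: {"catheter", "restenosis"},
    4: {"stent", "restenosis", "animal"},
}

def search(required=(), excluded=()):
    """Return IDs containing all `required` keywords and none of `excluded`."""
    hits = set()
    for doc_id, keywords in corpus.items():
        if not set(required) <= keywords:
            continue  # a required (AND) term is missing
        if set(excluded) & keywords:
            continue  # an excluded (NOT) term is present
        hits.add(doc_id)
    return hits

# Error of inclusion: one vague term pulls in semi-relevant animal studies.
broad = search(required=["stent"])                                  # {1, 2, 4}
# Error of exclusion: stacked AND/NOT operators leave a single hit.
narrow = search(required=["stent", "restenosis"], excluded=["animal"])  # {1}
```

Real literature searches operate over far richer indexes, but the trade-off between over- and under-retrieval works the same way.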

These challenges and their resulting errors can range in severity, from CER production delays to outright rejection by NBs designated under MDR.

NLP is your secret weapon for CER literature search

For these and other reasons, researchers say automation via AI and NLP is key to improving the efficiency, cost-effectiveness, speed, and accuracy of CER research. “Artificial intelligence and natural language processing based tools with cognitive capabilities provide a near-perfect solution for literature search,” write Indani et al.

The researchers also say NLP-based automation reduces bias, shrinks the need for multiple searches through algorithm reuse and customization, allows for auto-translation of sources in other languages, and can speed up the process of literature selection and extraction by orders of magnitude, “significantly decreas[ing] the time taken for manual search and filtering appropriate content.

“Automation will eventually reduce the cost, time, and resources (required) in the whole process,” they say. “The combination of a good literature search tool and a trained literature search professional can be the best solution to avoid errors and limitation of literature search.”

CapeStart can improve your CER literature search

CapeStart’s NLP automation, subject matter experts, and smart workflows provide a repeatable, transparent process that makes it easier and cheaper to produce and submit CERs, keep them up to date, and standardize them across all your products and business units. Our NLP models learn from reviewer inputs to continually assess and re-order scientific literature based on relevance — significantly improving accuracy, efficiency, and speed of literature reviews over manual approaches.
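
To make the re-ranking idea concrete, here is a deliberately simplified stand-in (CapeStart's actual models are proprietary; this toy version merely weights terms drawn from reviewer-accepted abstracts, using invented example text):

```python
from collections import Counter

def learn_term_weights(accepted_abstracts):
    """Weight each term by how many reviewer-accepted abstracts contain it."""
    weights = Counter()
    for text in accepted_abstracts:
        weights.update(set(text.lower().split()))
    return weights

def rerank(candidates, weights):
    """Order candidate abstracts by the summed weight of their terms."""
    def score(text):
        return sum(weights[term] for term in set(text.lower().split()))
    return sorted(candidates, key=score, reverse=True)

# Hypothetical reviewer-accepted abstracts (the "reviewer inputs").
weights = learn_term_weights([
    "stent restenosis outcomes in humans",
    "long term restenosis rates after stent implantation",
])
# New candidates are re-ordered so the most relevant surfaces first.
ranked = rerank([
    "catheter cleaning protocol",
    "restenosis after stent placement",
], weights)
```

A production system would use learned embeddings rather than raw term counts, but the feedback loop (reviewer decisions reshaping the ranking) is the same in spirit.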

That means medical device makers can produce audit-ready, compliant CERs — sometimes in less than half the time of a manual approach — powered by ready-to-go form templates, automated quality assurance, duplicate identification, and identification of common errors such as inclusive exclusions or exclusive inclusions.
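
Duplicate identification, one of the automated checks mentioned above, can be approximated with simple title normalization. This is a hedged sketch with hypothetical records, not a description of any vendor's actual dedup logic:

```python
import re

def normalize_title(title):
    """Normalize for dedup: lowercase, strip punctuation, collapse whitespace."""
    no_punct = re.sub(r"[^\w\s]", "", title.lower())
    return re.sub(r"\s+", " ", no_punct).strip()

def deduplicate(records):
    """Keep the first record seen for each normalized title."""
    seen, unique = set(), []
    for record in records:
        key = normalize_title(record["title"])
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

# Hypothetical search results: the same study indexed by two databases.
records = [
    {"title": "Restenosis After Stent Placement.", "source": "PubMed"},
    {"title": "restenosis after stent placement", "source": "EMBASE"},
    {"title": "Catheter usability study", "source": "PubMed"},
]
unique = deduplicate(records)  # the EMBASE duplicate is dropped
```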


Getting the Right Data for Clinical Evaluation Reports: An AI-Powered Approach was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

