

Getting the Right Data for Clinical Evaluation Reports: An AI-Powered Approach

Last Updated on October 8, 2021 by Editorial Team

Author(s): Gaugarin Oliver


If you’re a medical device maker with any presence (sales, operational, or otherwise) in the European Union (EU), you likely already know that the EU MDR, a regulatory regime with more stringent requirements for clinical evidence than its predecessor, the Medical Device Directive (MDD), came into effect earlier this year.

The bad news? Staying compliant with MDR has meant a great deal more time, expertise, and expense for most companies than anything they’ve dealt with previously.

This resource strain is due primarily to the time-consuming task of finding the correct data to satisfy the European MEDDEV (Medical Devices) guidance documents, which govern the creation of clinical evaluation reports (CERs). To stay compliant and receive a CE Mark for distribution inside the EU, medical device makers must submit a CER, an unbiased clinical assessment of the device based on published and unpublished internal literature, and keep it up to date. All CERs must demonstrate conformity with the Essential Requirements for safety and performance in MEDDEV 2.7/1 Rev. 4 Annex 1.

It all adds up to an incredible amount of work, especially considering MDR’s new requirements around device classification, technical documentation, postmarket surveillance, and other areas.

AI and natural language processing (NLP) technologies are powerful tools that can help medical device manufacturers create and keep comprehensive CERs up to date. But how, exactly?

Let’s find out.

What makes a CER?

At their core, CERs are extremely detailed benefit/risk assessments. They ensure the benefits of each medical device outweigh any potential downsides and can include data related to the specific device or data about equivalent devices.

CER production typically falls into five distinct stages:

  • Stage 0: Scope definition and planning
  • Stage 1: Identification of pertinent data
  • Stage 2: Appraisal of pertinent data
  • Stage 3: Analysis of clinical data
  • Stage 4: Finalization of the Clinical Evaluation Report

CERs get submitted to notified bodies (NBs) designated under MDR, which weigh questions such as: Will the device perform as intended? Will it be safe? Is it superior to alternative methods of care? After evaluating these and other queries, the NB then determines whether the device can be sold in Europe or whether it requires additional clinical data before approval.

Identifying pertinent data in CERs

To complete stages 1 and 2 above, CERs require a great deal of favorable and unfavorable data around the device, sourced internally and externally (via databases such as PubMed, Embase, Cochrane, and Google Scholar).
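To make the shape of such an external search concrete, here is a minimal sketch of how a Boolean query string for a literature database might be assembled. The function and all search terms are hypothetical illustrations, not an actual database API:

```python
def build_boolean_query(device_terms, outcome_terms, exclusions=()):
    """Combine synonym groups into a Boolean search string of the kind
    used against literature databases such as PubMed or Embase."""
    def group(terms):
        # OR together the synonyms within one concept group
        return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

    # AND narrows across concept groups; NOT excludes unwanted material
    query = f"{group(device_terms)} AND {group(outcome_terms)}"
    if exclusions:
        query += f" NOT {group(exclusions)}"
    return query


# Hypothetical example for a stent safety search
query = build_boolean_query(
    device_terms=["coronary stent", "drug-eluting stent"],
    outcome_terms=["adverse event", "device failure", "safety"],
    exclusions=["animal model"],
)
print(query)
# ("coronary stent" OR "drug-eluting stent") AND ("adverse event" OR "device failure" OR "safety") NOT ("animal model")
```

Keeping synonym groups and exclusions explicit like this also makes the search strategy reproducible, which matters when a notified body asks how the literature set was assembled.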

Companies must submit a complete evaluation of all technical features and testing results of the device as pertaining to both safety and performance, including:

  • Preclinical testing data used for verification and validation (including bench and/or animal testing)
  • Any device failures during testing or otherwise
  • Manufacturer-generated clinical data such as:

a) Premarket clinical investigations

b) Postmarket clinical data

c) Complaint reports

d) Explanted device evaluations

e) Field safety corrective actions

  • Instructions for use

Other crucial data points for inclusion surround the usability of the medical device, determined through human factor studies, and a review of the state of the art (SOTA) of the relevant clinical field.

Data collection for CERs: Common challenges and mistakes

The process of identifying pertinent data can, however, be misleading at best, and disastrous at worst, if performed incorrectly. Indeed, the results of manual literature searches can fluctuate wildly depending on the competence of search professionals, combined with several common errors and challenges, as detailed in a 2017 study by Indani et al.

Typical data inclusion errors

Indani et al. detail several common mistakes that can delay or derail any scientific literature search, including:

  1. Errors of inclusion (i.e., too much data). The use of vague or general search terms, unspecialized medical databases, and faulty Boolean logic can lead to an overwhelming amount of specific and non-specific information, along with large amounts of semi-relevant, irrelevant, or duplicate data.
  2. Errors of exclusion (i.e., too little data). When search terms are too specific, or when excluding Boolean logic (such as the “AND” operator) is overused, the opposite problem arises: a dearth of usable data that omits relevant information.
  3. Errors of inclusive exclusions. These errors arise from keyword and data selection bias on the part of those conducting the literature search, driven by a desired outcome.
  4. Errors of exclusive inclusions. These errors limit results through extremely exclusive search terms, such as failing to account for local dialects and spellings or excluding common synonyms.
  5. Errors of exclusive exclusions (limited relevance). When a search is conducted with bias and too much specificity, the result is a one-sided dataset that lacks sufficient information.
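Errors 4 and 5 in particular stem from queries that ignore regional spellings and synonyms. One common mitigation is term expansion before the query is built; the sketch below illustrates the idea, with a variant table that is purely illustrative, not an authoritative medical vocabulary:

```python
# Illustrative spelling/synonym variants (US vs. UK forms, etc.).
# A real search would draw on a curated vocabulary such as MeSH.
SPELLING_VARIANTS = {
    "anesthesia": ["anaesthesia"],
    "hemorrhage": ["haemorrhage"],
    "catheter": ["catheterization", "catheterisation"],
}

def expand_terms(terms, variants=SPELLING_VARIANTS):
    """Return the input terms plus any known regional spellings or
    synonyms, so a single OR-group covers all of them."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(variants.get(term.lower(), []))
    return expanded

print(expand_terms(["hemorrhage", "stent"]))
# ['hemorrhage', 'haemorrhage', 'stent']
```

Expanding each concept into all its variants before OR-grouping keeps the search from silently excluding, say, European publications that use British spellings.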

These challenges and their resulting errors can range in severity, from CER production delays to outright rejection by NBs designated under MDR.

NLP is your secret weapon for CER literature search

For these and other reasons, researchers say automation via AI and NLP is key to improving the efficiency, cost-effectiveness, speed, and accuracy of CER research. “Artificial intelligence and natural language processing based tools with cognitive capabilities provide a near-perfect solution for literature search,” say Indani et al.

The researchers also say NLP-based automation reduces bias, shrinks the need for multiple searches through algorithm reuse and customization, allows for auto-translation of sources in other languages, and can speed up the process of literature selection and extraction by orders of magnitude, “significantly decreas(ing) the time taken for manual search and filtering appropriate content.”

“Automation will eventually reduce the cost, time, and resources (required) in the whole process,” they say. “The combination of a good literature search tool and a trained literature search professional can be the best solution to avoid errors and limitation of literature search.”
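As a rough illustration of the kind of relevance-ranking automation the researchers describe (a toy sketch, not any specific vendor’s system), candidate abstracts could be scored against a query with simple TF-IDF weighting:

```python
import math
from collections import Counter

def tokenize(text):
    # Crude tokenizer: lowercase words, light punctuation stripping
    return [w.strip(".,;:()").lower() for w in text.split()]

def rank_abstracts(query, abstracts):
    """Rank abstracts by TF-IDF-weighted overlap with the query terms.
    A toy stand-in for the NLP relevance models described above."""
    docs = [tokenize(a) for a in abstracts]
    n = len(docs)
    q_terms = set(tokenize(query))
    # Inverse document frequency: rarer query terms weigh more
    idf = {
        t: math.log((n + 1) / (1 + sum(t in d for d in docs)))
        for t in q_terms
    }
    scored = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        scored.append((sum(tf[t] * idf[t] for t in q_terms), i))
    # Highest-scoring abstracts first
    return [abstracts[i] for _, i in sorted(scored, reverse=True)]
```

A production system would use trained language models and learn from reviewer feedback, but even this sketch shows how irrelevant literature can be pushed to the bottom of the review queue automatically.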

CapeStart can improve your CER literature search

CapeStart’s NLP automation, subject matter experts, and smart workflows provide a repeatable, transparent process that makes it easier and cheaper to produce and submit CERs, keep them up to date, and standardize them across all your products and business units. Our NLP models learn from reviewer inputs to continually assess and re-order scientific literature based on relevance, significantly improving the accuracy, efficiency, and speed of literature reviews over manual approaches.

That means medical device makers can produce audit-ready, compliant CERs, sometimes in less than half the time of a manual approach, powered by ready-to-go form templates, automated quality assurance, duplicate identification, and identification of common errors such as inclusive exclusions or exclusive inclusions.
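Duplicate identification, one of the steps mentioned above, can be sketched with a simple Jaccard-similarity comparison of citation titles; the function and the 0.8 threshold are illustrative assumptions, not a description of any particular product:

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def find_duplicates(titles, threshold=0.8):
    """Flag index pairs of citation titles that are likely duplicates.
    The threshold is an illustrative choice, not a validated setting."""
    pairs = []
    for i in range(len(titles)):
        for j in range(i + 1, len(titles)):
            if jaccard(titles[i], titles[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

Deduplicating near-identical records pulled from multiple databases (the same study indexed in both PubMed and Embase, for example) directly addresses the “errors of inclusion” problem described earlier.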


Getting the Right Data for Clinical Evaluation Reports: An AI-Powered Approach was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
