Introduction of Artificial Intelligence and Machine Learning in Medical Devices

The Canadian Institutes of Health Research in collaboration with Health Canada

Objectives

On February 22, 2019, CIHR hosted a Best Brains Exchange (BBE) on the topic of “Introduction of Artificial Intelligence and Machine Learning in Medical Devices” in collaboration with Health Canada.

The BBE was designed to address the following policy questions:

  1. How are researchers currently approaching algorithm development and programming for AI and ML as they may apply to medical devices?
  2. How should software that incorporates unsupervised algorithms or supervised algorithms that continuously learn be verified and validated and/or limited to an acceptably safe state?
  3. What knowledge would Health Canada need to start developing so that they can effectively regulate medical devices that incorporate AI and ML today and in the near future?

Policy context

The Medical Devices Bureau (MDB) at Health Canada is responsible for the market authorization of about 35,000 medium- and high-risk devices, and makes about 13,000 regulatory decisions annually. The medical device industry is increasingly propelled by technological advancement, which can shorten the innovation cycles of some devices to less than 18 months. The emergence of Artificial Intelligence (AI), including Machine Learning (ML), has opened a challenging new front for the regulation of medical devices. New algorithms are being developed for which neither the software nor its developers can explain how decisions are made. These algorithms, some of which will have outputs that cannot be explained or justified, introduce new challenges to the classic approach to regulating medical devices and their software. However, the performance of these non-deterministic devices may prove superior to current medical device diagnostics and therapies. To provide Canadians with timely and safe access to these kinds of devices, Health Canada needs to develop policy around the regulation of AI and ML.

The Minister of Health’s Mandate letter contains references to increased home care and better use of digital health technologies to help Canadians maintain and improve their health. Health Canada and program partners such as MEDEC (the national association representing the medical technology industry in Canada), the Canadian Agency for Drugs and Technologies in Health (CADTH), and the Council of Canadian Innovators (CCI) have highlighted the need for further discussions and concrete policies on the effective regulation of software that incorporates artificial intelligence and machine learning.

The MDB is currently leading an initiative that aims to improve access to digital health technologies and, in doing so, has recently created a Digital Health Division with eight staff members dedicated to this field. The goal of this new division is to improve outcomes for patients by adapting to rapidly changing technologies in digital health, responding to fast innovation cycles, and supporting the pre-market review of digital health technologies to facilitate their market access. One of the key and challenging areas identified within the scope of digital health is AI and ML.

Identified need for evidence

Research will better inform our understanding of the next generation of medical technology that incorporates elements of AI and ML programming. Health Canada may need to alter its approach when considering the delivery of its regulatory services. New legislation may need to be developed to accommodate the unique requirements of verifying and validating AI systems. Instead of a full pre-market evaluation of a specific software version, the non-deterministic nature of an AI system may require some form of qualification as safe and effective over a given period of the market lifetime of the device, in order to gain access to the Canadian marketplace.

Anticipated outcomes

Anticipated outcomes from the BBE include:

  • A report that captures the evidence presented as well as the discussions that take place.
  • Strengthened engagement with stakeholders in this rapidly evolving space.

The discussions will serve as a basis to:

  • Define regulatory requirements in Canada for this new evolving field.
  • Direct future guidance for medical device manufacturers employing AI and ML algorithms.
  • Consider the possible adaptation of Canada’s regulatory framework for medical devices employing AI and ML to meet the needs of the people of Canada, while also fulfilling Health Canada’s role to assess safety and effectiveness of medical devices.  

Presentation summaries

The meeting was facilitated by David Boudreau (Associate Director, Office of Legislative and Regulatory Modernization, Policy, Planning and International Affairs Directorate, Health Products and Food Branch, Health Canada) and David Buckeridge (Professor, McGill University). Below is a summary of the evidence presented by each of the invited speakers:

Towards Patient-Specific Treatment: Medical Applications of Machine Learning

Russ Greiner, B.Sc., M.Sc., Ph.D., Professor, Alberta Machine Intelligence Institute, University of Alberta

Patient-specific treatment requires determining which treatment has the best chance of success for an individual patient, based on all available information. As this typically depends on many patient characteristics, finding a single biomarker is often not sufficient; nor is it enough to find the set of top biomarkers, as the best treatment depends on how multiple factors collectively relate to the outcome, perhaps combined using a classifier such as a decision tree. In many situations, these "best treatment" classifiers are not known initially. Fortunately, there is often a corpus of historical data that includes both descriptions of previous patients and their treatment outcomes. The field of Machine Learning (ML) provides tools to help here: tools that can "learn" which treatment is most effective for a given patient, based on his or her specific symptoms.

This presentation introduced the relevant ideas using real-world medical examples, starting with a way to help predict which breast cancer patients are likely to suffer a relapse, based on the subcellular location of certain adhesion proteins as well as standard clinical features. This example was used to show the difference between standard association studies (designed to find biomarkers) and the machine learning methodology. The presentation then demonstrated that the methodology can be applied to a wide variety of other medical tasks, and discussed some foundational topics.
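The learning-from-historical-data approach described above can be sketched in a few lines. The following is a purely illustrative example, not code from the presentation: the feature names, patient records, and outcome labels are all invented, and a real model would be trained and validated on far larger clinical datasets.

```python
# Hypothetical sketch: fitting a decision-tree classifier on a corpus of
# historical patient records to predict a treatment outcome (relapse).
from sklearn.tree import DecisionTreeClassifier

# Each historical patient: [biomarker_level, age, tumour_grade] (invented data)
X = [
    [0.90, 64, 3],
    [0.80, 71, 3],
    [0.20, 45, 1],
    [0.30, 50, 1],
    [0.85, 58, 2],
    [0.15, 62, 1],
]
# Observed outcome for each patient: 1 = relapse, 0 = no relapse
y = [1, 1, 0, 0, 1, 0]

# A shallow tree combines multiple features into a single decision rule
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# Apply the learned rule to a new patient's combined characteristics
new_patient = [[0.70, 60, 2]]
print(clf.predict(new_patient)[0])
```

Unlike a single-biomarker association study, the classifier's prediction depends on how the features jointly relate to the outcome, which is the distinction drawn in the summary above.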

AI & Machine Learning in healthcare: developmental and validation hurdles

Hugh Harvey, MBBS, BSc (Hons), FRCR, MD (Res), Clinical Director, Kheiron Medical Technologies, and Associate Editor, npj Digital Medicine

This talk covered the developmental and validation hurdles involved with applying deep learning to healthcare, with a focus on deep learning as applied to image interpretation.

Discussion points included the data readiness stages, the basic statistics of how to measure output accuracy and explainability, and phases of clinical algorithmic trials, with some applied examples from recent literature.

Regulations and standards for AI in healthcare

Frank Rudzicz, Ph.D., M.Eng. Scientist, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Associate Professor, Computer Science, University of Toronto, Director of Artificial Intelligence, Surgical Safety Technologies Inc., and Faculty Member, Vector Institute for Artificial Intelligence

In this talk, we covered three main topics: the state of healthcare regulation today, a few challenges with safety in AI, and possible solutions. There are many challenges, including the prevalence of medical error and outdated means of characterizing software that can change its behaviour. Some new guidance from the FDA and a few examples of products using AI indicate a willingness to adapt to the new technology. However, the new technology is not fully understood, and even very accurate systems can internally hide unexpected or unknowable biases or behaviours. Exposing these unintended aspects should include an increased emphasis on ‘explainable AI’ and international technical standards.
