FDA and Global Partners Set Principles for AI in Medical Devices

The FDA has clarified its thinking through guidance documents and standards as it regulates a growing number of medical devices with an AI or machine learning component. In 2021, the agency collaborated with Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency to set out guiding principles for good machine learning practice. Last week, the agencies shared guiding principles on transparency, such as providing users with information on how an AI model came up with a result.

In 2022, the FDA clarified which clinical decision support tools must be regulated by the agency as medical devices, noting that tools that predict the risk of sepsis or stroke should be under its purview. Last year, the agency issued draft guidance on predetermined change control plans that would allow developers to make changes to an AI model after it is marketed, within bounds agreed upon ahead of time by the FDA.
The FDA is also co-leading a working group with the International Medical Device Regulators Forum on AI/ML-enabled medical devices.
AI has the potential to significantly improve patient care and medical professional satisfaction, advance research in medical device development, and enable personalized treatments, CDRH’s Tazbaz wrote.
“At the FDA, we know that appropriate integration of AI across the health care ecosystem will be paramount to achieving its potential while reducing risks and challenges,” he added.
The FDA’s Digital Health Center of Excellence aims to ensure that AI technologies used as medical devices are safe and effective, and to foster a collaborative approach to AI in healthcare.
One way of reducing risk is by adopting standards and best practices for the AI development lifecycle, Tazbaz wrote. For example, that approach would involve ensuring that data suitability, collection and quality match the intent and risk profile of the AI model being trained.
The healthcare community could also agree on common methodologies to provide information to users — including patients — on how a model was trained, deployed and managed.
Tazbaz also outlined how the FDA is thinking about quality assurance for AI in medical devices, adding that the agency plans to issue future publications to add to the discussion. Those papers will address standards and best practices, quality assurance laboratories, transparency and accountability, and risk management.