Artificial Intelligence (AI) is bringing new opportunities to healthcare, from enhancing diagnostics to advancing medical research. However, these advancements also present unique regulatory challenges. As Dr. Robert M. Califf, Commissioner of the U.S. Food and Drug Administration (FDA), observed, “AI is more than a tool; it’s a paradigm shift. But a shift with inherent risks requires a careful balance of innovation and regulation.” To address this, the FDA is adjusting its regulatory approach, focusing on a flexible, life-cycle model that emphasizes ongoing oversight. In this blog, we explore the FDA’s evolving role in the regulation of artificial intelligence and its efforts to ensure safety and efficacy without stifling innovation.
FDA Perspective on the Regulation of Artificial Intelligence in Health Care
The FDA’s journey with AI isn’t new, but recent advances have required accelerated adaptation. Beginning with its approval of PAPNET in 1995, an early AI-powered diagnostic tool for cervical cancer detection, the FDA has evolved its regulatory stance as AI applications have diversified. While early AI applications were narrow in scope, today’s AI models encompass a vast array of functions and levels of complexity—from simple algorithms in diagnostics to large-scale generative models with potential for profound, sometimes unpredictable impacts. This shift necessitates that the FDA develop flexible, adaptive regulatory mechanisms to ensure safety without stifling innovation.
The FDA’s regulatory role primarily focuses on ensuring AI systems are safe and effective for their intended use. However, AI technologies evolve rapidly, which can make traditional regulatory frameworks inadequate. The FDA’s solution? A risk-based, life-cycle-focused approach that emphasizes postmarket monitoring and a commitment to adapting regulations to keep pace with AI advances. A life-cycle approach is necessary because AI solutions are not static; they are continually learning and adapting. Developing an AI solution and not monitoring it for drift (a change in performance over time) is irresponsible.
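To make the idea of drift monitoring concrete, here is a minimal, hypothetical sketch of the kind of postmarket check a developer might run. It simply re-measures accuracy on recent labeled cases and flags a drop beyond a tolerance relative to the accuracy recorded at validation. The function names, tolerance, and window size are illustrative assumptions, not an FDA-prescribed method.

```python
from dataclasses import dataclass

@dataclass
class DriftReport:
    baseline_accuracy: float  # accuracy measured during premarket validation
    recent_accuracy: float    # accuracy over the most recent labeled cases
    drifted: bool             # True if the drop exceeds the allowed tolerance

def check_performance_drift(predictions, labels, baseline_accuracy,
                            tolerance=0.05, window=500):
    """Compare recent accuracy to the validation baseline (illustrative only).

    tolerance and window are assumed values; real monitoring plans would
    justify these choices clinically and statistically.
    """
    recent_preds = predictions[-window:]
    recent_labels = labels[-window:]
    correct = sum(p == y for p, y in zip(recent_preds, recent_labels))
    recent_accuracy = correct / max(len(recent_labels), 1)
    drifted = (baseline_accuracy - recent_accuracy) > tolerance
    return DriftReport(baseline_accuracy, recent_accuracy, drifted)

# Example: a model validated at 92% accuracy is re-checked on recent cases.
report = check_performance_drift(
    predictions=[1, 0, 1, 1, 0, 1, 0, 0],
    labels=[1, 0, 0, 1, 1, 1, 0, 1],
    baseline_accuracy=0.92,
)
if report.drifted:
    print(f"Drift flagged: accuracy {report.baseline_accuracy:.2f} -> {report.recent_accuracy:.2f}")
```

Real postmarket monitoring would use clinically meaningful metrics (sensitivity, specificity, calibration), subgroup breakdowns, and statistical tests rather than a single threshold, but the shape of the check, periodically re-measuring performance against the validated baseline, is the same.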
Dr. Haider J. Warraich and Troy Tazbaz from the FDA, alongside Commissioner Dr. Califf, provide valuable perspectives on the agency’s regulatory priorities in a recent JAMA article titled “FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine.” Warraich and Tazbaz bring clinical and technical expertise to the FDA’s stance on AI tools used in diagnostics and patient monitoring, and the article emphasizes the need for collaboration across regulatory agencies and industries to harmonize global standards.
According to Dr. Califf, the FDA’s ultimate goal in the regulation of artificial intelligence is a comprehensive framework to ensure that AI technologies not only meet safety and efficacy benchmarks but also respect patient rights, ethical standards, and equitable access to care. This vision aligns with international cooperation to create unified standards for safe AI use worldwide.
Keeping Pace with AI’s Evolution in Medical Devices
One of the FDA’s greatest challenges is keeping pace with the rapid rate of AI advancements. Unlike static medical devices, AI technologies are dynamic, evolving based on new data inputs and usage contexts. To address this, the FDA advocates a “total product life cycle” approach and predetermined change control plans. This strategy, piloted through programs like the Software Precertification Pilot Program, aims to enable quicker approvals while maintaining rigorous standards. Ultimately, change at the legislative level is required because the FDA concluded it did not have the statutory authority to continue with the precertification program.
The article also notes the need for industry self-regulation, because the FDA does not have the funding to keep up with the sheer volume of solutions coming to market. In addition, many AI solutions must be tested at the particular institution where they will be used to confirm they work with local data, so industry has to take on some of the responsibility for testing, ensuring systems work, and preventing harm to patients.
More: Where is Digital Health (not just AI) Going in 2024?
Large Language Models (LLMs) and Generative AI
The rise of generative AI introduces complex risks due to its potential for “hallucinations” (generating plausible but incorrect information) and evolving outputs. As of 2024, the FDA has yet to approve any LLM for direct clinical use, but applications are emerging for generating clinical summaries, decision-support tools, and patient education.

While not discussed in the article, whether LLMs need FDA approval for medical use is a complex topic. FDA oversight applies when a vendor advertises or promotes a solution for the diagnosis or treatment of a medical condition; if a doctor of their own volition chooses to use it, that is their decision. Elon Musk recently solicited X users to upload medical images to see what Grok could figure out from them, whether its assessment was right or wrong. Many have complained that this is a HIPAA violation or an improper use under FDA purview. Neither opinion is correct. As a patient, I can do what I want with my medical image; my provider couldn’t upload it because of HIPAA, but I, as a patient, can. And Musk isn’t, at least for now, proposing a medical diagnosis, which keeps Grok clear of the FDA, though it could be argued this is a gray area.
Given the challenges LLMs present in maintaining reliability, the FDA advocates for a cautious approach. Implementing specialized evaluation tools and independent validation pathways for LLMs may become critical as their use expands. Ensuring accuracy, interpretability, and oversight will be paramount, especially in high-risk specialties like cardiology or oncology, where erroneous outputs could have significant implications for patient care. Above all, doctors and healthcare professionals need to use their clinical judgment.
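As one hypothetical illustration of what a specialized evaluation tool might check, the sketch below flags numbers in a generated clinical summary that do not appear anywhere in the source note, a crude proxy for one class of hallucination. The function name, example texts, and number-matching idea are assumptions made for illustration, not an FDA-endorsed or validated method.

```python
import re

def unsupported_numbers(source_note: str, generated_summary: str) -> list[str]:
    """Return numeric values that appear in the summary but not in the source note.

    A number the model invented (e.g. a lab value or dose not present in the
    note) is a likely hallucination candidate and should trigger human review.
    """
    number_pattern = re.compile(r"\d+(?:\.\d+)?")
    source_numbers = set(number_pattern.findall(source_note))
    summary_numbers = number_pattern.findall(generated_summary)
    return [n for n in summary_numbers if n not in source_numbers]

# Illustrative example: the summary reports a potassium value not in the note.
note = "Patient admitted with chest pain. Troponin 0.02, potassium 4.1."
summary = "Chest pain admission. Troponin 0.02; potassium 5.8 on arrival."

flagged = unsupported_numbers(note, summary)
if flagged:
    print("Clinician review needed, unsupported values:", flagged)  # ['5.8']
```

A real validation pathway would go far beyond string matching, drawing on clinician review, benchmark datasets, and site-specific testing, but even simple automated checks like this can catch a subset of errors before a clinician ever sees the output.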
Aligning Financial Incentives
While the FDA enforces safety and efficacy, regulated industries are ultimately responsible for their products’ ethical use. Many AI applications can optimize operations and improve financial efficiency, but these incentives must not overshadow patient care. The FDA highlights the need for balanced models that align financial incentives with patient health outcomes.
With financial pressures often conflicting with clinical goals, the FDA urges health systems to prioritize models that improve patient outcomes, not just operational efficiency. Striking this balance will require collaboration between regulators, developers, and healthcare providers to ensure AI’s potential is realized in a patient-centered way.
Final Thoughts
AI in health care has the potential to reshape the industry, but only if implemented responsibly. This transformative journey requires a steady regulatory hand. The FDA’s role extends beyond approving individual products to fostering an ecosystem where safety, transparency, and ongoing monitoring remain paramount. As AI applications become more integrated into clinical settings, the FDA’s life-cycle-focused approach will be crucial in preserving public trust and ensuring that AI technologies fulfill their promise of improving patient outcomes. By continuously refining its regulation of artificial intelligence, the FDA is setting a high standard, aiming not only to advance AI technology but also to safeguard the health and well-being of patients across the U.S. and beyond.